00:00:00.001 Started by upstream project "autotest-nightly" build number 3782 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3162 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.143 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.155 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.167 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:09.167 > git config core.sparsecheckout # timeout=10 00:00:09.179 > git read-tree -mu HEAD # timeout=10 00:00:09.198 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:09.216 Commit message: "pool: fixes for VisualBuild class" 00:00:09.216 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:09.296 [Pipeline] Start of Pipeline 00:00:09.312 [Pipeline] library 00:00:09.315 Loading library shm_lib@master 00:00:09.315 Library shm_lib@master is cached. Copying from home. 00:00:09.332 [Pipeline] node 00:00:09.343 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.344 [Pipeline] { 00:00:09.354 [Pipeline] catchError 00:00:09.356 [Pipeline] { 00:00:09.369 [Pipeline] wrap 00:00:09.376 [Pipeline] { 00:00:09.382 [Pipeline] stage 00:00:09.383 [Pipeline] { (Prologue) 00:00:09.544 [Pipeline] sh 00:00:09.833 + logger -p user.info -t JENKINS-CI 00:00:09.854 [Pipeline] echo 00:00:09.857 Node: CYP9 00:00:09.864 [Pipeline] sh 00:00:10.166 [Pipeline] setCustomBuildProperty 00:00:10.178 [Pipeline] echo 00:00:10.180 Cleanup processes 00:00:10.185 [Pipeline] sh 00:00:10.470 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.470 63319 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.483 [Pipeline] sh 00:00:10.766 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.766 ++ grep -v 'sudo pgrep' 00:00:10.766 ++ awk '{print $1}' 00:00:10.766 + sudo kill -9 00:00:10.766 + true 00:00:10.782 [Pipeline] cleanWs 00:00:10.793 [WS-CLEANUP] Deleting project workspace... 00:00:10.793 [WS-CLEANUP] Deferred wipeout is used... 
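The "Cleanup processes" step traced above is a small pre-flight idiom: list any SPDK processes still running under the job workspace from an earlier build, drop the pgrep invocation itself from the listing, and force-kill the rest without failing when nothing matches. A minimal standalone sketch of the same idiom, using the workspace path recorded in this log:

    #!/usr/bin/env bash
    # List leftover test processes under the job workspace (pid + cmdline).
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # An empty PID list makes "kill -9" exit non-zero, hence the tolerated
    # failure ("+ true") visible in the trace above.
    sudo kill -9 $pids || true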
00:00:10.799 [WS-CLEANUP] done 00:00:10.804 [Pipeline] setCustomBuildProperty 00:00:10.819 [Pipeline] sh 00:00:11.101 + sudo git config --global --replace-all safe.directory '*' 00:00:11.179 [Pipeline] nodesByLabel 00:00:11.181 Found a total of 2 nodes with the 'sorcerer' label 00:00:11.192 [Pipeline] httpRequest 00:00:11.197 HttpMethod: GET 00:00:11.198 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.201 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.223 Response Code: HTTP/1.1 200 OK 00:00:11.223 Success: Status code 200 is in the accepted range: 200,404 00:00:11.224 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:22.273 [Pipeline] sh 00:00:22.560 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:22.578 [Pipeline] httpRequest 00:00:22.583 HttpMethod: GET 00:00:22.584 URL: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:22.584 Sending request to url: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:22.600 Response Code: HTTP/1.1 200 OK 00:00:22.601 Success: Status code 200 is in the accepted range: 200,404 00:00:22.601 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:01:06.000 [Pipeline] sh 00:01:06.286 + tar --no-same-owner -xf spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:01:08.845 [Pipeline] sh 00:01:09.140 + git -C spdk log --oneline -n5 00:01:09.140 e55c9a812 vbdev_error: decrement error_num atomically 00:01:09.140 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:01:09.140 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:01:09.140 f470a0dc6 event: do not call reactor events from spdk_thread context 00:01:09.140 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code 00:01:09.153 [Pipeline] } 00:01:09.170 [Pipeline] // stage 00:01:09.180 [Pipeline] stage 00:01:09.182 [Pipeline] { (Prepare) 00:01:09.201 [Pipeline] writeFile 00:01:09.219 [Pipeline] sh 00:01:09.504 + logger -p user.info -t JENKINS-CI 00:01:09.518 [Pipeline] sh 00:01:09.811 + logger -p user.info -t JENKINS-CI 00:01:09.825 [Pipeline] sh 00:01:10.117 + cat autorun-spdk.conf 00:01:10.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.117 SPDK_TEST_NVMF=1 00:01:10.117 SPDK_TEST_NVME_CLI=1 00:01:10.117 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.117 SPDK_TEST_NVMF_NICS=e810 00:01:10.117 SPDK_RUN_UBSAN=1 00:01:10.117 NET_TYPE=phy 00:01:10.126 RUN_NIGHTLY=1 00:01:10.131 [Pipeline] readFile 00:01:10.164 [Pipeline] withEnv 00:01:10.166 [Pipeline] { 00:01:10.181 [Pipeline] sh 00:01:10.467 + set -ex 00:01:10.467 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:10.467 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.467 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.467 ++ SPDK_TEST_NVMF=1 00:01:10.467 ++ SPDK_TEST_NVME_CLI=1 00:01:10.467 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.467 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.467 ++ SPDK_RUN_UBSAN=1 00:01:10.467 ++ NET_TYPE=phy 00:01:10.467 ++ RUN_NIGHTLY=1 00:01:10.467 + case $SPDK_TEST_NVMF_NICS in 00:01:10.467 + DRIVERS=ice 00:01:10.467 + [[ tcp == \r\d\m\a ]] 00:01:10.467 + [[ -n ice ]] 00:01:10.467 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:10.467 rmmod: ERROR: Module mlx4_ib is not currently 
loaded 00:01:10.467 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:10.467 rmmod: ERROR: Module irdma is not currently loaded 00:01:10.467 rmmod: ERROR: Module i40iw is not currently loaded 00:01:10.467 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:10.467 + true 00:01:10.467 + for D in $DRIVERS 00:01:10.467 + sudo modprobe ice 00:01:10.467 + exit 0 00:01:10.478 [Pipeline] } 00:01:10.497 [Pipeline] // withEnv 00:01:10.503 [Pipeline] } 00:01:10.520 [Pipeline] // stage 00:01:10.530 [Pipeline] catchError 00:01:10.532 [Pipeline] { 00:01:10.547 [Pipeline] timeout 00:01:10.547 Timeout set to expire in 50 min 00:01:10.549 [Pipeline] { 00:01:10.565 [Pipeline] stage 00:01:10.567 [Pipeline] { (Tests) 00:01:10.583 [Pipeline] sh 00:01:10.870 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.870 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.870 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.870 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.870 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.870 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.870 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.870 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.870 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.870 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.870 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:10.870 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.870 + source /etc/os-release 00:01:10.870 ++ NAME='Fedora Linux' 00:01:10.870 ++ VERSION='38 (Cloud Edition)' 00:01:10.870 ++ ID=fedora 00:01:10.870 ++ VERSION_ID=38 00:01:10.870 ++ VERSION_CODENAME= 00:01:10.870 ++ PLATFORM_ID=platform:f38 00:01:10.870 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:10.870 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.870 ++ LOGO=fedora-logo-icon 00:01:10.870 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:10.870 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.870 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:10.870 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.870 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.870 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.870 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:10.870 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.870 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:10.870 ++ SUPPORT_END=2024-05-14 00:01:10.870 ++ VARIANT='Cloud Edition' 00:01:10.870 ++ VARIANT_ID=cloud 00:01:10.870 + uname -a 00:01:10.870 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:10.870 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:14.169 Hugepages 00:01:14.169 node hugesize free / total 00:01:14.169 node0 1048576kB 0 / 0 00:01:14.169 node0 2048kB 0 / 0 00:01:14.169 node1 1048576kB 0 / 0 00:01:14.169 node1 2048kB 0 / 0 00:01:14.169 00:01:14.169 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.169 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:14.169 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:14.169 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:14.169 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:14.169 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:14.169 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:14.170 
I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:14.170 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:14.170 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:14.170 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:14.170 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:14.170 + rm -f /tmp/spdk-ld-path 00:01:14.170 + source autorun-spdk.conf 00:01:14.170 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.170 ++ SPDK_TEST_NVMF=1 00:01:14.170 ++ SPDK_TEST_NVME_CLI=1 00:01:14.170 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.170 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.170 ++ SPDK_RUN_UBSAN=1 00:01:14.170 ++ NET_TYPE=phy 00:01:14.170 ++ RUN_NIGHTLY=1 00:01:14.170 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.170 + [[ -n '' ]] 00:01:14.170 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.170 + for M in /var/spdk/build-*-manifest.txt 00:01:14.170 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.170 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.170 + for M in /var/spdk/build-*-manifest.txt 00:01:14.170 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.170 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.170 ++ uname 00:01:14.170 + [[ Linux == \L\i\n\u\x ]] 00:01:14.170 + sudo dmesg -T 00:01:14.170 + sudo dmesg --clear 00:01:14.170 + dmesg_pid=64291 00:01:14.170 + [[ Fedora Linux == FreeBSD ]] 00:01:14.170 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.170 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.170 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.170 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.170 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.170 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.170 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.170 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.170 + sudo dmesg -Tw 00:01:14.170 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.170 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.170 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.170 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.170 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.170 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.170 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.170 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.170 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.170 Test configuration: 00:01:14.170 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.170 SPDK_TEST_NVMF=1 00:01:14.170 SPDK_TEST_NVME_CLI=1 00:01:14.170 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.170 SPDK_TEST_NVMF_NICS=e810 00:01:14.170 SPDK_RUN_UBSAN=1 00:01:14.170 NET_TYPE=phy 00:01:14.170 RUN_NIGHTLY=1 00:26:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:14.170 00:26:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.170 00:26:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.170 00:26:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.170 00:26:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.170 00:26:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.170 00:26:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.170 00:26:32 -- paths/export.sh@5 -- $ export PATH 00:01:14.170 00:26:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.170 00:26:32 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:14.170 00:26:32 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:14.170 00:26:32 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717799192.XXXXXX 00:01:14.170 00:26:32 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717799192.2QDVp4 00:01:14.170 00:26:32 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:14.170 00:26:32 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 
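The "Test configuration" dump above is the contents of autorun-spdk.conf, a plain shell key=value file that selects the suites for this job: functional NVMe-oF tests over TCP on e810 NICs, built with UBSan, at nightly scope. A minimal sketch of reproducing the run by hand, assuming the checkout at the workspace path shown in the log; the conf file location (/tmp here) is arbitrary as long as its path is handed to autorun.sh:

    #!/usr/bin/env bash
    # Write the exact configuration recorded in this log.
    cat > /tmp/autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVME_CLI=1
    SPDK_TEST_NVMF_TRANSPORT=tcp
    SPDK_TEST_NVMF_NICS=e810
    SPDK_RUN_UBSAN=1
    NET_TYPE=phy
    RUN_NIGHTLY=1
    EOF
    # Hand the configuration to autorun.sh, as the job does above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autorun.sh /tmp/autorun-spdk.conf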
00:01:14.170 00:26:32 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:14.170 00:26:32 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.170 00:26:32 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.170 00:26:32 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:14.170 00:26:32 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:14.170 00:26:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.170 00:26:32 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:14.170 00:26:32 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:14.170 00:26:32 -- pm/common@17 -- $ local monitor 00:01:14.170 00:26:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.170 00:26:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.170 00:26:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.170 00:26:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.170 00:26:32 -- pm/common@25 -- $ sleep 1 00:01:14.170 00:26:32 -- pm/common@21 -- $ date +%s 00:01:14.170 00:26:32 -- pm/common@21 -- $ date +%s 00:01:14.170 00:26:32 -- pm/common@21 -- $ date +%s 00:01:14.170 00:26:32 -- pm/common@21 -- $ date +%s 00:01:14.170 00:26:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717799192 00:01:14.170 00:26:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717799192 00:01:14.170 00:26:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717799192 00:01:14.170 00:26:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717799192 00:01:14.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717799192_collect-vmstat.pm.log 00:01:14.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717799192_collect-cpu-load.pm.log 00:01:14.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717799192_collect-cpu-temp.pm.log 00:01:14.170 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717799192_collect-bmc-pm.bmc.pm.log 00:01:15.113 00:26:33 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:15.113 00:26:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.113 
00:26:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.113 00:26:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.113 00:26:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.113 Fri Jun 7 10:26:33 PM UTC 2024 00:01:15.113 00:26:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.113 v24.09-pre-53-ge55c9a812 00:01:15.113 00:26:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.113 00:26:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.113 00:26:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.113 00:26:33 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:15.113 00:26:33 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:15.113 00:26:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.113 ************************************ 00:01:15.113 START TEST ubsan 00:01:15.113 ************************************ 00:01:15.113 00:26:33 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:15.113 using ubsan 00:01:15.113 00:01:15.113 real 0m0.000s 00:01:15.113 user 0m0.000s 00:01:15.113 sys 0m0.000s 00:01:15.113 00:26:33 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:15.113 00:26:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.113 ************************************ 00:01:15.113 END TEST ubsan 00:01:15.113 ************************************ 00:01:15.113 00:26:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.113 00:26:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.113 00:26:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.113 00:26:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:15.113 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.113 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.684 Using 'verbs' RDMA provider 00:01:31.539 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.774 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.774 Creating mk/config.mk...done. 00:01:43.774 Creating mk/cc.flags.mk...done. 00:01:43.774 Type 'make' to build. 00:01:43.774 00:27:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:43.774 00:27:01 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:43.774 00:27:01 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:43.774 00:27:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.774 ************************************ 00:01:43.774 START TEST make 00:01:43.774 ************************************ 00:01:43.774 00:27:01 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:43.774 make[1]: Nothing to be done for 'all'. 
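From this point to the end of the section the log is the build itself: ./configure with flags derived from the test configuration, then a parallel make that first builds the bundled DPDK via Meson and Ninja (the "Meson build system" output that follows) before compiling SPDK. A sketch of the equivalent manual steps, with the flags copied verbatim from the configure line above; -j144 matches this host and should be adjusted elsewhere:

    #!/usr/bin/env bash
    set -e
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Flags taken verbatim from the configure invocation in this log.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    # make drives the DPDK Meson/Ninja sub-build before building SPDK itself.
    make -j144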
00:01:51.949 The Meson build system 00:01:51.949 Version: 1.3.1 00:01:51.949 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.949 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.949 Build type: native build 00:01:51.949 Program cat found: YES (/usr/bin/cat) 00:01:51.949 Project name: DPDK 00:01:51.949 Project version: 24.03.0 00:01:51.949 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.949 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.949 Host machine cpu family: x86_64 00:01:51.949 Host machine cpu: x86_64 00:01:51.949 Message: ## Building in Developer Mode ## 00:01:51.949 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.949 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.949 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.949 Program python3 found: YES (/usr/bin/python3) 00:01:51.949 Program cat found: YES (/usr/bin/cat) 00:01:51.949 Compiler for C supports arguments -march=native: YES 00:01:51.949 Checking for size of "void *" : 8 00:01:51.949 Checking for size of "void *" : 8 (cached) 00:01:51.949 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:51.949 Library m found: YES 00:01:51.949 Library numa found: YES 00:01:51.949 Has header "numaif.h" : YES 00:01:51.949 Library fdt found: NO 00:01:51.949 Library execinfo found: NO 00:01:51.949 Has header "execinfo.h" : YES 00:01:51.949 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.949 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.949 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.949 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.949 Run-time dependency openssl found: YES 3.0.9 00:01:51.949 Run-time dependency libpcap found: YES 1.10.4 00:01:51.949 Has header "pcap.h" with dependency libpcap: YES 00:01:51.949 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.949 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.949 Compiler for C supports arguments -Wformat: YES 00:01:51.949 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.949 Compiler for C supports arguments -Wformat-security: NO 00:01:51.949 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.949 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.949 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.949 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.949 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.949 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.949 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.949 Compiler for C supports arguments -Wundef: YES 00:01:51.949 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.949 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.949 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.949 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.949 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.949 Program objdump found: YES (/usr/bin/objdump) 00:01:51.949 Compiler for C supports arguments -mavx512f: YES 00:01:51.949 Checking if "AVX512 checking" compiles: YES 
00:01:51.949 Fetching value of define "__SSE4_2__" : 1 00:01:51.949 Fetching value of define "__AES__" : 1 00:01:51.949 Fetching value of define "__AVX__" : 1 00:01:51.949 Fetching value of define "__AVX2__" : 1 00:01:51.949 Fetching value of define "__AVX512BW__" : 1 00:01:51.949 Fetching value of define "__AVX512CD__" : 1 00:01:51.949 Fetching value of define "__AVX512DQ__" : 1 00:01:51.949 Fetching value of define "__AVX512F__" : 1 00:01:51.949 Fetching value of define "__AVX512VL__" : 1 00:01:51.949 Fetching value of define "__PCLMUL__" : 1 00:01:51.949 Fetching value of define "__RDRND__" : 1 00:01:51.949 Fetching value of define "__RDSEED__" : 1 00:01:51.949 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.949 Fetching value of define "__znver1__" : (undefined) 00:01:51.949 Fetching value of define "__znver2__" : (undefined) 00:01:51.949 Fetching value of define "__znver3__" : (undefined) 00:01:51.949 Fetching value of define "__znver4__" : (undefined) 00:01:51.949 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.949 Message: lib/log: Defining dependency "log" 00:01:51.949 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.949 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.949 Checking for function "getentropy" : NO 00:01:51.949 Message: lib/eal: Defining dependency "eal" 00:01:51.949 Message: lib/ring: Defining dependency "ring" 00:01:51.949 Message: lib/rcu: Defining dependency "rcu" 00:01:51.949 Message: lib/mempool: Defining dependency "mempool" 00:01:51.949 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.949 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.949 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.949 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.949 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.949 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.949 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.949 Compiler for C supports arguments -mpclmul: YES 00:01:51.949 Compiler for C supports arguments -maes: YES 00:01:51.949 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.949 Compiler for C supports arguments -mavx512bw: YES 00:01:51.949 Compiler for C supports arguments -mavx512dq: YES 00:01:51.949 Compiler for C supports arguments -mavx512vl: YES 00:01:51.949 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.949 Compiler for C supports arguments -mavx2: YES 00:01:51.949 Compiler for C supports arguments -mavx: YES 00:01:51.949 Message: lib/net: Defining dependency "net" 00:01:51.949 Message: lib/meter: Defining dependency "meter" 00:01:51.949 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.949 Message: lib/pci: Defining dependency "pci" 00:01:51.949 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.949 Message: lib/hash: Defining dependency "hash" 00:01:51.949 Message: lib/timer: Defining dependency "timer" 00:01:51.949 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.949 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.949 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.949 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.949 Message: lib/power: Defining dependency "power" 00:01:51.950 Message: lib/reorder: Defining dependency "reorder" 00:01:51.950 Message: lib/security: Defining dependency "security" 00:01:51.950 Has header "linux/userfaultfd.h" : YES 00:01:51.950 Has header "linux/vduse.h" : YES 00:01:51.950 
Message: lib/vhost: Defining dependency "vhost" 00:01:51.950 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.950 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.950 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.950 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.950 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.950 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.950 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.950 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.950 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.950 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.950 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.950 Configuring doxy-api-html.conf using configuration 00:01:51.950 Configuring doxy-api-man.conf using configuration 00:01:51.950 Program mandb found: YES (/usr/bin/mandb) 00:01:51.950 Program sphinx-build found: NO 00:01:51.950 Configuring rte_build_config.h using configuration 00:01:51.950 Message: 00:01:51.950 ================= 00:01:51.950 Applications Enabled 00:01:51.950 ================= 00:01:51.950 00:01:51.950 apps: 00:01:51.950 00:01:51.950 00:01:51.950 Message: 00:01:51.950 ================= 00:01:51.950 Libraries Enabled 00:01:51.950 ================= 00:01:51.950 00:01:51.950 libs: 00:01:51.950 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.950 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.950 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.950 00:01:51.950 Message: 00:01:51.950 =============== 00:01:51.950 Drivers Enabled 00:01:51.950 =============== 00:01:51.950 00:01:51.950 common: 00:01:51.950 00:01:51.950 bus: 00:01:51.950 pci, vdev, 00:01:51.950 mempool: 00:01:51.950 ring, 00:01:51.950 dma: 00:01:51.950 00:01:51.950 net: 00:01:51.950 00:01:51.950 crypto: 00:01:51.950 00:01:51.950 compress: 00:01:51.950 00:01:51.950 vdpa: 00:01:51.950 00:01:51.950 00:01:51.950 Message: 00:01:51.950 ================= 00:01:51.950 Content Skipped 00:01:51.950 ================= 00:01:51.950 00:01:51.950 apps: 00:01:51.950 dumpcap: explicitly disabled via build config 00:01:51.950 graph: explicitly disabled via build config 00:01:51.950 pdump: explicitly disabled via build config 00:01:51.950 proc-info: explicitly disabled via build config 00:01:51.950 test-acl: explicitly disabled via build config 00:01:51.950 test-bbdev: explicitly disabled via build config 00:01:51.950 test-cmdline: explicitly disabled via build config 00:01:51.950 test-compress-perf: explicitly disabled via build config 00:01:51.950 test-crypto-perf: explicitly disabled via build config 00:01:51.950 test-dma-perf: explicitly disabled via build config 00:01:51.950 test-eventdev: explicitly disabled via build config 00:01:51.950 test-fib: explicitly disabled via build config 00:01:51.950 test-flow-perf: explicitly disabled via build config 00:01:51.950 test-gpudev: explicitly disabled via build config 00:01:51.950 test-mldev: explicitly disabled via build config 00:01:51.950 test-pipeline: explicitly disabled via build config 00:01:51.950 test-pmd: explicitly disabled via build config 00:01:51.950 test-regex: explicitly disabled via build config 00:01:51.950 test-sad: explicitly disabled via build config 00:01:51.950 test-security-perf: explicitly 
disabled via build config 00:01:51.950 00:01:51.950 libs: 00:01:51.950 argparse: explicitly disabled via build config 00:01:51.950 metrics: explicitly disabled via build config 00:01:51.950 acl: explicitly disabled via build config 00:01:51.950 bbdev: explicitly disabled via build config 00:01:51.950 bitratestats: explicitly disabled via build config 00:01:51.950 bpf: explicitly disabled via build config 00:01:51.950 cfgfile: explicitly disabled via build config 00:01:51.950 distributor: explicitly disabled via build config 00:01:51.950 efd: explicitly disabled via build config 00:01:51.950 eventdev: explicitly disabled via build config 00:01:51.950 dispatcher: explicitly disabled via build config 00:01:51.950 gpudev: explicitly disabled via build config 00:01:51.950 gro: explicitly disabled via build config 00:01:51.950 gso: explicitly disabled via build config 00:01:51.950 ip_frag: explicitly disabled via build config 00:01:51.950 jobstats: explicitly disabled via build config 00:01:51.950 latencystats: explicitly disabled via build config 00:01:51.950 lpm: explicitly disabled via build config 00:01:51.950 member: explicitly disabled via build config 00:01:51.950 pcapng: explicitly disabled via build config 00:01:51.950 rawdev: explicitly disabled via build config 00:01:51.950 regexdev: explicitly disabled via build config 00:01:51.950 mldev: explicitly disabled via build config 00:01:51.950 rib: explicitly disabled via build config 00:01:51.950 sched: explicitly disabled via build config 00:01:51.950 stack: explicitly disabled via build config 00:01:51.950 ipsec: explicitly disabled via build config 00:01:51.950 pdcp: explicitly disabled via build config 00:01:51.950 fib: explicitly disabled via build config 00:01:51.950 port: explicitly disabled via build config 00:01:51.950 pdump: explicitly disabled via build config 00:01:51.950 table: explicitly disabled via build config 00:01:51.950 pipeline: explicitly disabled via build config 00:01:51.950 graph: explicitly disabled via build config 00:01:51.950 node: explicitly disabled via build config 00:01:51.950 00:01:51.950 drivers: 00:01:51.950 common/cpt: not in enabled drivers build config 00:01:51.950 common/dpaax: not in enabled drivers build config 00:01:51.950 common/iavf: not in enabled drivers build config 00:01:51.950 common/idpf: not in enabled drivers build config 00:01:51.950 common/ionic: not in enabled drivers build config 00:01:51.950 common/mvep: not in enabled drivers build config 00:01:51.950 common/octeontx: not in enabled drivers build config 00:01:51.950 bus/auxiliary: not in enabled drivers build config 00:01:51.950 bus/cdx: not in enabled drivers build config 00:01:51.950 bus/dpaa: not in enabled drivers build config 00:01:51.950 bus/fslmc: not in enabled drivers build config 00:01:51.950 bus/ifpga: not in enabled drivers build config 00:01:51.950 bus/platform: not in enabled drivers build config 00:01:51.950 bus/uacce: not in enabled drivers build config 00:01:51.950 bus/vmbus: not in enabled drivers build config 00:01:51.950 common/cnxk: not in enabled drivers build config 00:01:51.950 common/mlx5: not in enabled drivers build config 00:01:51.950 common/nfp: not in enabled drivers build config 00:01:51.950 common/nitrox: not in enabled drivers build config 00:01:51.950 common/qat: not in enabled drivers build config 00:01:51.950 common/sfc_efx: not in enabled drivers build config 00:01:51.950 mempool/bucket: not in enabled drivers build config 00:01:51.950 mempool/cnxk: not in enabled drivers build config 
00:01:51.950 mempool/dpaa: not in enabled drivers build config 00:01:51.950 mempool/dpaa2: not in enabled drivers build config 00:01:51.950 mempool/octeontx: not in enabled drivers build config 00:01:51.950 mempool/stack: not in enabled drivers build config 00:01:51.950 dma/cnxk: not in enabled drivers build config 00:01:51.950 dma/dpaa: not in enabled drivers build config 00:01:51.950 dma/dpaa2: not in enabled drivers build config 00:01:51.950 dma/hisilicon: not in enabled drivers build config 00:01:51.950 dma/idxd: not in enabled drivers build config 00:01:51.950 dma/ioat: not in enabled drivers build config 00:01:51.950 dma/skeleton: not in enabled drivers build config 00:01:51.950 net/af_packet: not in enabled drivers build config 00:01:51.950 net/af_xdp: not in enabled drivers build config 00:01:51.950 net/ark: not in enabled drivers build config 00:01:51.950 net/atlantic: not in enabled drivers build config 00:01:51.950 net/avp: not in enabled drivers build config 00:01:51.950 net/axgbe: not in enabled drivers build config 00:01:51.950 net/bnx2x: not in enabled drivers build config 00:01:51.950 net/bnxt: not in enabled drivers build config 00:01:51.950 net/bonding: not in enabled drivers build config 00:01:51.950 net/cnxk: not in enabled drivers build config 00:01:51.950 net/cpfl: not in enabled drivers build config 00:01:51.950 net/cxgbe: not in enabled drivers build config 00:01:51.950 net/dpaa: not in enabled drivers build config 00:01:51.951 net/dpaa2: not in enabled drivers build config 00:01:51.951 net/e1000: not in enabled drivers build config 00:01:51.951 net/ena: not in enabled drivers build config 00:01:51.951 net/enetc: not in enabled drivers build config 00:01:51.951 net/enetfec: not in enabled drivers build config 00:01:51.951 net/enic: not in enabled drivers build config 00:01:51.951 net/failsafe: not in enabled drivers build config 00:01:51.951 net/fm10k: not in enabled drivers build config 00:01:51.951 net/gve: not in enabled drivers build config 00:01:51.951 net/hinic: not in enabled drivers build config 00:01:51.951 net/hns3: not in enabled drivers build config 00:01:51.951 net/i40e: not in enabled drivers build config 00:01:51.951 net/iavf: not in enabled drivers build config 00:01:51.951 net/ice: not in enabled drivers build config 00:01:51.951 net/idpf: not in enabled drivers build config 00:01:51.951 net/igc: not in enabled drivers build config 00:01:51.951 net/ionic: not in enabled drivers build config 00:01:51.951 net/ipn3ke: not in enabled drivers build config 00:01:51.951 net/ixgbe: not in enabled drivers build config 00:01:51.951 net/mana: not in enabled drivers build config 00:01:51.951 net/memif: not in enabled drivers build config 00:01:51.951 net/mlx4: not in enabled drivers build config 00:01:51.951 net/mlx5: not in enabled drivers build config 00:01:51.951 net/mvneta: not in enabled drivers build config 00:01:51.951 net/mvpp2: not in enabled drivers build config 00:01:51.951 net/netvsc: not in enabled drivers build config 00:01:51.951 net/nfb: not in enabled drivers build config 00:01:51.951 net/nfp: not in enabled drivers build config 00:01:51.951 net/ngbe: not in enabled drivers build config 00:01:51.951 net/null: not in enabled drivers build config 00:01:51.951 net/octeontx: not in enabled drivers build config 00:01:51.951 net/octeon_ep: not in enabled drivers build config 00:01:51.951 net/pcap: not in enabled drivers build config 00:01:51.951 net/pfe: not in enabled drivers build config 00:01:51.951 net/qede: not in enabled drivers build config 
00:01:51.951 net/ring: not in enabled drivers build config 00:01:51.951 net/sfc: not in enabled drivers build config 00:01:51.951 net/softnic: not in enabled drivers build config 00:01:51.951 net/tap: not in enabled drivers build config 00:01:51.951 net/thunderx: not in enabled drivers build config 00:01:51.951 net/txgbe: not in enabled drivers build config 00:01:51.951 net/vdev_netvsc: not in enabled drivers build config 00:01:51.951 net/vhost: not in enabled drivers build config 00:01:51.951 net/virtio: not in enabled drivers build config 00:01:51.951 net/vmxnet3: not in enabled drivers build config 00:01:51.951 raw/*: missing internal dependency, "rawdev" 00:01:51.951 crypto/armv8: not in enabled drivers build config 00:01:51.951 crypto/bcmfs: not in enabled drivers build config 00:01:51.951 crypto/caam_jr: not in enabled drivers build config 00:01:51.951 crypto/ccp: not in enabled drivers build config 00:01:51.951 crypto/cnxk: not in enabled drivers build config 00:01:51.951 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.951 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.951 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.951 crypto/mlx5: not in enabled drivers build config 00:01:51.951 crypto/mvsam: not in enabled drivers build config 00:01:51.951 crypto/nitrox: not in enabled drivers build config 00:01:51.951 crypto/null: not in enabled drivers build config 00:01:51.951 crypto/octeontx: not in enabled drivers build config 00:01:51.951 crypto/openssl: not in enabled drivers build config 00:01:51.951 crypto/scheduler: not in enabled drivers build config 00:01:51.951 crypto/uadk: not in enabled drivers build config 00:01:51.951 crypto/virtio: not in enabled drivers build config 00:01:51.951 compress/isal: not in enabled drivers build config 00:01:51.951 compress/mlx5: not in enabled drivers build config 00:01:51.951 compress/nitrox: not in enabled drivers build config 00:01:51.951 compress/octeontx: not in enabled drivers build config 00:01:51.951 compress/zlib: not in enabled drivers build config 00:01:51.951 regex/*: missing internal dependency, "regexdev" 00:01:51.951 ml/*: missing internal dependency, "mldev" 00:01:51.951 vdpa/ifc: not in enabled drivers build config 00:01:51.951 vdpa/mlx5: not in enabled drivers build config 00:01:51.951 vdpa/nfp: not in enabled drivers build config 00:01:51.951 vdpa/sfc: not in enabled drivers build config 00:01:51.951 event/*: missing internal dependency, "eventdev" 00:01:51.951 baseband/*: missing internal dependency, "bbdev" 00:01:51.951 gpu/*: missing internal dependency, "gpudev" 00:01:51.951 00:01:51.951 00:01:51.951 Build targets in project: 84 00:01:51.951 00:01:51.951 DPDK 24.03.0 00:01:51.951 00:01:51.951 User defined options 00:01:51.951 buildtype : debug 00:01:51.951 default_library : shared 00:01:51.951 libdir : lib 00:01:51.951 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.951 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.951 c_link_args : 00:01:51.951 cpu_instruction_set: native 00:01:51.951 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:51.951 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:51.951 enable_docs : false 00:01:51.951 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.951 enable_kmods : false 00:01:51.951 tests : false 00:01:51.951 00:01:51.951 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.228 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.228 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.228 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.228 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.228 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.228 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.228 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.228 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.492 [8/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.492 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.492 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.492 [11/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.492 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.492 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.492 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.492 [15/267] Linking static target lib/librte_kvargs.a 00:01:52.492 [16/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.492 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.492 [18/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.492 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.492 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.492 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.492 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.492 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.492 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.492 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.492 [26/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.492 [27/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.492 [28/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.492 [29/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.492 [30/267] Linking static target lib/librte_log.a 00:01:52.492 [31/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.492 [32/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.492 [33/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.492 [34/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.492 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.751 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.751 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.751 [38/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.751 [39/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.751 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.751 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.751 [42/267] Linking static target lib/librte_pci.a 00:01:52.751 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.751 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.751 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.751 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.751 [47/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.751 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.751 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.751 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.751 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.751 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.751 [53/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.751 [54/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.751 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.751 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.751 [57/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.751 [58/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.751 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.751 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.751 [61/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.751 [62/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.751 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.751 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.751 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.751 [66/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.751 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.751 [68/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.751 [69/267] Linking static target lib/librte_timer.a 00:01:52.751 [70/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.751 [71/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.751 [72/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.751 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.751 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.751 [75/267] Generating 
lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.751 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.751 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.751 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.752 [79/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.752 [80/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.752 [81/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.752 [82/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.752 [83/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.752 [84/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.752 [85/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.752 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.752 [87/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.752 [88/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.752 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.752 [90/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.011 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.011 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.011 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.011 [94/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.011 [95/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.011 [96/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.011 [97/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.011 [98/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.011 [99/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.011 [100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.011 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.011 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.011 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.011 [104/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.011 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.011 [106/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.011 [107/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.011 [108/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.011 [109/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.011 [110/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.011 [111/267] Linking static target lib/librte_net.a 00:01:53.011 [112/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.011 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.011 [114/267] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.011 [115/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.011 [116/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.011 [117/267] Linking static target lib/librte_ring.a 00:01:53.011 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.011 [119/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.011 [120/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.011 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.011 [122/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.011 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.011 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.011 [125/267] Linking static target lib/librte_telemetry.a 00:01:53.011 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.011 [127/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.011 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.011 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.011 [130/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.011 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.011 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.011 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.011 [134/267] Linking static target lib/librte_meter.a 00:01:53.011 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.011 [136/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.011 [137/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.011 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.011 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.012 [140/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.012 [141/267] Linking static target lib/librte_cmdline.a 00:01:53.012 [142/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.012 [143/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.012 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.012 [145/267] Linking static target lib/librte_rcu.a 00:01:53.012 [146/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.012 [147/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.012 [148/267] Linking static target lib/librte_mbuf.a 00:01:53.012 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.012 [150/267] Linking static target lib/librte_hash.a 00:01:53.012 [151/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.012 [152/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.012 [153/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.272 [154/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.272 [155/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.272 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.272 [157/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.272 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.272 [159/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.272 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.272 [161/267] Linking static target lib/librte_dmadev.a 00:01:53.272 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.272 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.272 [164/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.272 [165/267] Linking static target lib/librte_compressdev.a 00:01:53.272 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.272 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.272 [168/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.272 [169/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.272 [170/267] Linking static target lib/librte_power.a 00:01:53.272 [171/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.272 [172/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.272 [173/267] Linking static target lib/librte_mempool.a 00:01:53.272 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.272 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.272 [176/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.272 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.272 [178/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.272 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.272 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.272 [181/267] Linking static target lib/librte_eal.a 00:01:53.272 [182/267] Linking target lib/librte_log.so.24.1 00:01:53.272 [183/267] Linking static target drivers/librte_bus_vdev.a 00:01:53.272 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.272 [185/267] Linking static target lib/librte_reorder.a 00:01:53.272 [186/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.272 [187/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.272 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.272 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.272 [190/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.272 [191/267] Linking static target lib/librte_security.a 00:01:53.272 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.272 [193/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.272 [194/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.272 [195/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.272 [196/267] Linking static target 
lib/librte_cryptodev.a 00:01:53.533 [197/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.533 [198/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:53.533 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.533 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.533 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.533 [202/267] Linking static target drivers/librte_bus_pci.a 00:01:53.533 [203/267] Linking target lib/librte_kvargs.so.24.1 00:01:53.533 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.533 [205/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:53.533 [206/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.533 [207/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.533 [208/267] Linking static target drivers/librte_mempool_ring.a 00:01:53.533 [209/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.533 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.794 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.794 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:53.794 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.794 [214/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.794 [215/267] Linking static target lib/librte_ethdev.a 00:01:53.794 [216/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.794 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.056 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.056 [219/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.056 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.056 [221/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.056 [222/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.056 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.317 [224/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.317 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.317 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.895 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.895 [228/267] Linking static target lib/librte_vhost.a 00:01:55.467 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.381 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.975 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:04.917 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.917 [233/267] Linking target lib/librte_eal.so.24.1 00:02:05.177 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.177 [235/267] Linking target lib/librte_meter.so.24.1 00:02:05.177 [236/267] Linking target lib/librte_ring.so.24.1 00:02:05.177 [237/267] Linking target lib/librte_pci.so.24.1 00:02:05.177 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:05.177 [239/267] Linking target lib/librte_timer.so.24.1 00:02:05.177 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.177 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.177 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.177 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.177 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.177 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.438 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:05.438 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:05.438 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.438 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.438 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.438 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.438 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:05.699 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.699 [254/267] Linking target lib/librte_net.so.24.1 00:02:05.699 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:05.699 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:05.699 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:05.960 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.960 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.960 [260/267] Linking target lib/librte_hash.so.24.1 00:02:05.960 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:05.960 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:05.960 [263/267] Linking target lib/librte_security.so.24.1 00:02:05.960 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.221 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.221 [266/267] Linking target lib/librte_power.so.24.1 00:02:06.221 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:06.221 INFO: autodetecting backend as ninja 00:02:06.221 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:07.166 CC lib/log/log.o 00:02:07.428 CC lib/log/log_flags.o 00:02:07.428 CC lib/log/log_deprecated.o 00:02:07.428 CC lib/ut/ut.o 00:02:07.428 CC lib/ut_mock/mock.o 00:02:07.428 LIB libspdk_log.a 00:02:07.428 LIB libspdk_ut.a 00:02:07.428 LIB libspdk_ut_mock.a 00:02:07.428 SO libspdk_ut.so.2.0 00:02:07.428 SO libspdk_log.so.7.0 00:02:07.428 SO libspdk_ut_mock.so.6.0 00:02:07.690 SYMLINK libspdk_ut.so 00:02:07.690 SYMLINK libspdk_ut_mock.so 00:02:07.690 SYMLINK libspdk_log.so 
00:02:07.953 CC lib/util/base64.o 00:02:07.953 CC lib/ioat/ioat.o 00:02:07.953 CC lib/util/bit_array.o 00:02:07.953 CC lib/util/cpuset.o 00:02:07.953 CC lib/util/crc16.o 00:02:07.953 CC lib/util/crc32.o 00:02:07.953 CC lib/util/crc32c.o 00:02:07.953 CC lib/util/crc32_ieee.o 00:02:07.953 CC lib/util/crc64.o 00:02:07.953 CC lib/util/dif.o 00:02:07.953 CC lib/util/fd.o 00:02:07.953 CC lib/util/file.o 00:02:07.953 CC lib/util/hexlify.o 00:02:07.953 CXX lib/trace_parser/trace.o 00:02:07.953 CC lib/util/math.o 00:02:07.953 CC lib/util/iov.o 00:02:07.953 CC lib/dma/dma.o 00:02:07.953 CC lib/util/pipe.o 00:02:07.953 CC lib/util/strerror_tls.o 00:02:07.953 CC lib/util/string.o 00:02:07.953 CC lib/util/uuid.o 00:02:07.953 CC lib/util/fd_group.o 00:02:07.953 CC lib/util/xor.o 00:02:07.953 CC lib/util/zipf.o 00:02:08.215 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.215 CC lib/vfio_user/host/vfio_user.o 00:02:08.215 LIB libspdk_dma.a 00:02:08.215 SO libspdk_dma.so.4.0 00:02:08.215 LIB libspdk_ioat.a 00:02:08.215 SO libspdk_ioat.so.7.0 00:02:08.215 SYMLINK libspdk_dma.so 00:02:08.484 SYMLINK libspdk_ioat.so 00:02:08.484 LIB libspdk_vfio_user.a 00:02:08.484 SO libspdk_vfio_user.so.5.0 00:02:08.484 LIB libspdk_util.a 00:02:08.484 SYMLINK libspdk_vfio_user.so 00:02:08.484 SO libspdk_util.so.9.0 00:02:08.791 SYMLINK libspdk_util.so 00:02:08.791 LIB libspdk_trace_parser.a 00:02:08.791 SO libspdk_trace_parser.so.5.0 00:02:09.051 SYMLINK libspdk_trace_parser.so 00:02:09.051 CC lib/env_dpdk/env.o 00:02:09.051 CC lib/env_dpdk/memory.o 00:02:09.051 CC lib/env_dpdk/pci.o 00:02:09.051 CC lib/env_dpdk/init.o 00:02:09.051 CC lib/env_dpdk/threads.o 00:02:09.051 CC lib/env_dpdk/pci_ioat.o 00:02:09.051 CC lib/vmd/vmd.o 00:02:09.051 CC lib/env_dpdk/pci_virtio.o 00:02:09.051 CC lib/vmd/led.o 00:02:09.051 CC lib/env_dpdk/pci_vmd.o 00:02:09.051 CC lib/env_dpdk/pci_idxd.o 00:02:09.051 CC lib/env_dpdk/pci_event.o 00:02:09.051 CC lib/env_dpdk/sigbus_handler.o 00:02:09.051 CC lib/env_dpdk/pci_dpdk.o 00:02:09.051 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.051 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.051 CC lib/conf/conf.o 00:02:09.051 CC lib/idxd/idxd.o 00:02:09.051 CC lib/rdma/common.o 00:02:09.051 CC lib/idxd/idxd_user.o 00:02:09.051 CC lib/rdma/rdma_verbs.o 00:02:09.051 CC lib/json/json_parse.o 00:02:09.051 CC lib/idxd/idxd_kernel.o 00:02:09.051 CC lib/json/json_util.o 00:02:09.051 CC lib/json/json_write.o 00:02:09.310 LIB libspdk_conf.a 00:02:09.310 SO libspdk_conf.so.6.0 00:02:09.311 LIB libspdk_rdma.a 00:02:09.311 LIB libspdk_json.a 00:02:09.311 SO libspdk_rdma.so.6.0 00:02:09.311 SYMLINK libspdk_conf.so 00:02:09.311 SO libspdk_json.so.6.0 00:02:09.311 SYMLINK libspdk_rdma.so 00:02:09.571 SYMLINK libspdk_json.so 00:02:09.571 LIB libspdk_idxd.a 00:02:09.571 SO libspdk_idxd.so.12.0 00:02:09.571 LIB libspdk_vmd.a 00:02:09.571 SO libspdk_vmd.so.6.0 00:02:09.571 SYMLINK libspdk_idxd.so 00:02:09.831 SYMLINK libspdk_vmd.so 00:02:09.831 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.831 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.831 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.831 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.831 LIB libspdk_env_dpdk.a 00:02:09.831 SO libspdk_env_dpdk.so.14.1 00:02:10.092 LIB libspdk_jsonrpc.a 00:02:10.092 SYMLINK libspdk_env_dpdk.so 00:02:10.092 SO libspdk_jsonrpc.so.6.0 00:02:10.092 SYMLINK libspdk_jsonrpc.so 00:02:10.664 CC lib/rpc/rpc.o 00:02:10.664 LIB libspdk_rpc.a 00:02:10.664 SO libspdk_rpc.so.6.0 00:02:10.925 SYMLINK libspdk_rpc.so 00:02:11.187 CC lib/keyring/keyring.o 00:02:11.187 CC 
lib/trace/trace.o 00:02:11.187 CC lib/keyring/keyring_rpc.o 00:02:11.187 CC lib/trace/trace_flags.o 00:02:11.187 CC lib/trace/trace_rpc.o 00:02:11.187 CC lib/notify/notify.o 00:02:11.187 CC lib/notify/notify_rpc.o 00:02:11.448 LIB libspdk_notify.a 00:02:11.448 SO libspdk_notify.so.6.0 00:02:11.448 LIB libspdk_keyring.a 00:02:11.448 LIB libspdk_trace.a 00:02:11.448 SO libspdk_keyring.so.1.0 00:02:11.448 SYMLINK libspdk_notify.so 00:02:11.448 SO libspdk_trace.so.10.0 00:02:11.448 SYMLINK libspdk_keyring.so 00:02:11.448 SYMLINK libspdk_trace.so 00:02:12.022 CC lib/thread/iobuf.o 00:02:12.022 CC lib/thread/thread.o 00:02:12.022 CC lib/sock/sock.o 00:02:12.022 CC lib/sock/sock_rpc.o 00:02:12.283 LIB libspdk_sock.a 00:02:12.283 SO libspdk_sock.so.9.0 00:02:12.283 SYMLINK libspdk_sock.so 00:02:12.544 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:12.544 CC lib/nvme/nvme_ctrlr.o 00:02:12.544 CC lib/nvme/nvme_fabric.o 00:02:12.544 CC lib/nvme/nvme_ns_cmd.o 00:02:12.544 CC lib/nvme/nvme_ns.o 00:02:12.544 CC lib/nvme/nvme_pcie_common.o 00:02:12.544 CC lib/nvme/nvme_pcie.o 00:02:12.544 CC lib/nvme/nvme_qpair.o 00:02:12.544 CC lib/nvme/nvme.o 00:02:12.544 CC lib/nvme/nvme_quirks.o 00:02:12.544 CC lib/nvme/nvme_transport.o 00:02:12.544 CC lib/nvme/nvme_discovery.o 00:02:12.544 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.544 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.544 CC lib/nvme/nvme_tcp.o 00:02:12.544 CC lib/nvme/nvme_opal.o 00:02:12.544 CC lib/nvme/nvme_io_msg.o 00:02:12.544 CC lib/nvme/nvme_poll_group.o 00:02:12.544 CC lib/nvme/nvme_zns.o 00:02:12.544 CC lib/nvme/nvme_stubs.o 00:02:12.544 CC lib/nvme/nvme_auth.o 00:02:12.544 CC lib/nvme/nvme_cuse.o 00:02:12.544 CC lib/nvme/nvme_rdma.o 00:02:12.805 LIB libspdk_thread.a 00:02:12.805 SO libspdk_thread.so.10.0 00:02:12.805 SYMLINK libspdk_thread.so 00:02:13.067 CC lib/accel/accel.o 00:02:13.067 CC lib/accel/accel_sw.o 00:02:13.067 CC lib/accel/accel_rpc.o 00:02:13.067 CC lib/blob/blobstore.o 00:02:13.067 CC lib/blob/request.o 00:02:13.067 CC lib/blob/zeroes.o 00:02:13.067 CC lib/blob/blob_bs_dev.o 00:02:13.067 CC lib/init/json_config.o 00:02:13.067 CC lib/init/subsystem.o 00:02:13.328 CC lib/init/subsystem_rpc.o 00:02:13.328 CC lib/init/rpc.o 00:02:13.328 CC lib/virtio/virtio.o 00:02:13.328 CC lib/virtio/virtio_vhost_user.o 00:02:13.328 CC lib/virtio/virtio_vfio_user.o 00:02:13.328 CC lib/virtio/virtio_pci.o 00:02:13.328 LIB libspdk_init.a 00:02:13.591 SO libspdk_init.so.5.0 00:02:13.591 LIB libspdk_virtio.a 00:02:13.591 SO libspdk_virtio.so.7.0 00:02:13.591 SYMLINK libspdk_init.so 00:02:13.591 SYMLINK libspdk_virtio.so 00:02:13.851 CC lib/event/app.o 00:02:13.851 CC lib/event/reactor.o 00:02:13.851 CC lib/event/log_rpc.o 00:02:13.851 CC lib/event/app_rpc.o 00:02:13.851 CC lib/event/scheduler_static.o 00:02:14.113 LIB libspdk_accel.a 00:02:14.113 SO libspdk_accel.so.15.0 00:02:14.113 SYMLINK libspdk_accel.so 00:02:14.113 LIB libspdk_event.a 00:02:14.374 SO libspdk_event.so.13.1 00:02:14.374 SYMLINK libspdk_event.so 00:02:14.374 LIB libspdk_nvme.a 00:02:14.374 CC lib/bdev/bdev.o 00:02:14.374 CC lib/bdev/bdev_rpc.o 00:02:14.374 CC lib/bdev/bdev_zone.o 00:02:14.374 CC lib/bdev/part.o 00:02:14.374 CC lib/bdev/scsi_nvme.o 00:02:14.635 SO libspdk_nvme.so.13.0 00:02:14.897 SYMLINK libspdk_nvme.so 00:02:15.842 LIB libspdk_blob.a 00:02:15.842 SO libspdk_blob.so.11.0 00:02:15.842 SYMLINK libspdk_blob.so 00:02:16.103 CC lib/lvol/lvol.o 00:02:16.103 CC lib/blobfs/blobfs.o 00:02:16.103 CC lib/blobfs/tree.o 00:02:16.675 LIB libspdk_bdev.a 00:02:16.675 SO libspdk_bdev.so.15.0 
00:02:16.936 SYMLINK libspdk_bdev.so 00:02:16.936 LIB libspdk_blobfs.a 00:02:16.936 SO libspdk_blobfs.so.10.0 00:02:16.936 LIB libspdk_lvol.a 00:02:16.936 SO libspdk_lvol.so.10.0 00:02:16.936 SYMLINK libspdk_blobfs.so 00:02:17.199 SYMLINK libspdk_lvol.so 00:02:17.199 CC lib/scsi/dev.o 00:02:17.199 CC lib/scsi/lun.o 00:02:17.199 CC lib/scsi/scsi.o 00:02:17.199 CC lib/scsi/port.o 00:02:17.199 CC lib/nvmf/ctrlr.o 00:02:17.199 CC lib/scsi/scsi_pr.o 00:02:17.199 CC lib/nvmf/ctrlr_discovery.o 00:02:17.199 CC lib/scsi/scsi_bdev.o 00:02:17.199 CC lib/ftl/ftl_core.o 00:02:17.199 CC lib/nvmf/ctrlr_bdev.o 00:02:17.199 CC lib/scsi/task.o 00:02:17.199 CC lib/nvmf/subsystem.o 00:02:17.199 CC lib/ftl/ftl_init.o 00:02:17.199 CC lib/scsi/scsi_rpc.o 00:02:17.199 CC lib/nbd/nbd.o 00:02:17.199 CC lib/ftl/ftl_layout.o 00:02:17.199 CC lib/ublk/ublk.o 00:02:17.199 CC lib/nvmf/nvmf.o 00:02:17.199 CC lib/ftl/ftl_debug.o 00:02:17.199 CC lib/nbd/nbd_rpc.o 00:02:17.199 CC lib/ublk/ublk_rpc.o 00:02:17.199 CC lib/ftl/ftl_io.o 00:02:17.199 CC lib/nvmf/nvmf_rpc.o 00:02:17.199 CC lib/nvmf/transport.o 00:02:17.199 CC lib/ftl/ftl_sb.o 00:02:17.199 CC lib/ftl/ftl_l2p.o 00:02:17.199 CC lib/nvmf/tcp.o 00:02:17.199 CC lib/ftl/ftl_l2p_flat.o 00:02:17.199 CC lib/nvmf/stubs.o 00:02:17.199 CC lib/ftl/ftl_nv_cache.o 00:02:17.199 CC lib/nvmf/mdns_server.o 00:02:17.199 CC lib/ftl/ftl_band.o 00:02:17.199 CC lib/nvmf/rdma.o 00:02:17.199 CC lib/ftl/ftl_band_ops.o 00:02:17.199 CC lib/nvmf/auth.o 00:02:17.199 CC lib/ftl/ftl_writer.o 00:02:17.199 CC lib/ftl/ftl_reloc.o 00:02:17.199 CC lib/ftl/ftl_rq.o 00:02:17.199 CC lib/ftl/ftl_l2p_cache.o 00:02:17.199 CC lib/ftl/ftl_p2l.o 00:02:17.199 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.199 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.200 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.200 CC lib/ftl/utils/ftl_conf.o 00:02:17.200 CC lib/ftl/utils/ftl_md.o 00:02:17.200 CC lib/ftl/utils/ftl_mempool.o 00:02:17.200 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.200 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.200 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.200 CC lib/ftl/utils/ftl_property.o 00:02:17.200 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.200 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.200 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:17.200 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.200 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.200 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.200 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.200 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.200 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.200 CC lib/ftl/ftl_trace.o 00:02:17.200 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.200 CC lib/ftl/base/ftl_base_dev.o 00:02:17.769 LIB libspdk_nbd.a 00:02:17.769 SO libspdk_nbd.so.7.0 00:02:17.769 SYMLINK libspdk_nbd.so 00:02:17.769 LIB libspdk_scsi.a 00:02:17.769 SO libspdk_scsi.so.9.0 00:02:18.031 LIB libspdk_ublk.a 00:02:18.031 SYMLINK libspdk_scsi.so 00:02:18.031 SO libspdk_ublk.so.3.0 00:02:18.031 SYMLINK libspdk_ublk.so 00:02:18.292 LIB libspdk_ftl.a 00:02:18.292 CC lib/vhost/vhost.o 00:02:18.292 CC lib/vhost/vhost_rpc.o 00:02:18.292 CC 
lib/vhost/vhost_scsi.o 00:02:18.292 CC lib/vhost/vhost_blk.o 00:02:18.292 CC lib/vhost/rte_vhost_user.o 00:02:18.292 CC lib/iscsi/conn.o 00:02:18.292 CC lib/iscsi/init_grp.o 00:02:18.292 CC lib/iscsi/iscsi.o 00:02:18.292 CC lib/iscsi/md5.o 00:02:18.292 CC lib/iscsi/param.o 00:02:18.292 CC lib/iscsi/portal_grp.o 00:02:18.292 CC lib/iscsi/tgt_node.o 00:02:18.292 CC lib/iscsi/iscsi_subsystem.o 00:02:18.292 CC lib/iscsi/iscsi_rpc.o 00:02:18.292 CC lib/iscsi/task.o 00:02:18.292 SO libspdk_ftl.so.9.0 00:02:18.867 SYMLINK libspdk_ftl.so 00:02:19.127 LIB libspdk_nvmf.a 00:02:19.127 SO libspdk_nvmf.so.18.1 00:02:19.127 LIB libspdk_vhost.a 00:02:19.389 SO libspdk_vhost.so.8.0 00:02:19.389 SYMLINK libspdk_nvmf.so 00:02:19.389 SYMLINK libspdk_vhost.so 00:02:19.389 LIB libspdk_iscsi.a 00:02:19.650 SO libspdk_iscsi.so.8.0 00:02:19.650 SYMLINK libspdk_iscsi.so 00:02:20.221 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.482 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.482 CC module/sock/posix/posix.o 00:02:20.482 CC module/blob/bdev/blob_bdev.o 00:02:20.482 LIB libspdk_env_dpdk_rpc.a 00:02:20.482 CC module/accel/iaa/accel_iaa.o 00:02:20.482 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.482 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.482 CC module/accel/error/accel_error.o 00:02:20.482 CC module/keyring/linux/keyring.o 00:02:20.482 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.482 CC module/accel/error/accel_error_rpc.o 00:02:20.482 CC module/keyring/linux/keyring_rpc.o 00:02:20.482 CC module/keyring/file/keyring.o 00:02:20.482 CC module/keyring/file/keyring_rpc.o 00:02:20.482 CC module/accel/ioat/accel_ioat.o 00:02:20.482 CC module/accel/dsa/accel_dsa.o 00:02:20.482 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.482 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.482 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.482 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.482 LIB libspdk_keyring_linux.a 00:02:20.482 LIB libspdk_scheduler_dynamic.a 00:02:20.482 LIB libspdk_scheduler_gscheduler.a 00:02:20.482 LIB libspdk_keyring_file.a 00:02:20.744 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.744 SO libspdk_keyring_linux.so.1.0 00:02:20.744 LIB libspdk_accel_error.a 00:02:20.744 LIB libspdk_accel_ioat.a 00:02:20.744 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.744 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.744 LIB libspdk_accel_iaa.a 00:02:20.744 SO libspdk_keyring_file.so.1.0 00:02:20.744 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.744 LIB libspdk_blob_bdev.a 00:02:20.744 SO libspdk_accel_error.so.2.0 00:02:20.744 SO libspdk_accel_ioat.so.6.0 00:02:20.744 SO libspdk_accel_iaa.so.3.0 00:02:20.744 LIB libspdk_accel_dsa.a 00:02:20.744 SO libspdk_blob_bdev.so.11.0 00:02:20.744 SYMLINK libspdk_keyring_linux.so 00:02:20.744 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.744 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.744 SYMLINK libspdk_keyring_file.so 00:02:20.744 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.744 SO libspdk_accel_dsa.so.5.0 00:02:20.744 SYMLINK libspdk_accel_error.so 00:02:20.744 SYMLINK libspdk_accel_ioat.so 00:02:20.744 SYMLINK libspdk_blob_bdev.so 00:02:20.744 SYMLINK libspdk_accel_iaa.so 00:02:20.744 SYMLINK libspdk_accel_dsa.so 00:02:21.006 LIB libspdk_sock_posix.a 00:02:21.006 SO libspdk_sock_posix.so.6.0 00:02:21.267 SYMLINK libspdk_sock_posix.so 00:02:21.267 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.267 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.267 CC module/bdev/error/vbdev_error.o 00:02:21.267 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.267 CC 
module/bdev/malloc/bdev_malloc.o 00:02:21.267 CC module/bdev/gpt/gpt.o 00:02:21.267 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.267 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.267 CC module/bdev/split/vbdev_split.o 00:02:21.267 CC module/bdev/delay/vbdev_delay.o 00:02:21.267 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.267 CC module/bdev/ftl/bdev_ftl.o 00:02:21.267 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.267 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.267 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.267 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.267 CC module/bdev/null/bdev_null.o 00:02:21.267 CC module/bdev/null/bdev_null_rpc.o 00:02:21.267 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.267 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.267 CC module/bdev/nvme/bdev_nvme.o 00:02:21.267 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.267 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.267 CC module/bdev/nvme/nvme_rpc.o 00:02:21.267 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.267 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.267 CC module/bdev/nvme/vbdev_opal.o 00:02:21.267 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.267 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.267 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.267 CC module/bdev/raid/bdev_raid.o 00:02:21.267 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.267 CC module/bdev/aio/bdev_aio.o 00:02:21.267 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.267 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.267 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.267 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.267 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.267 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.267 CC module/bdev/raid/raid0.o 00:02:21.267 CC module/bdev/raid/raid1.o 00:02:21.267 CC module/bdev/raid/concat.o 00:02:21.527 LIB libspdk_blobfs_bdev.a 00:02:21.527 LIB libspdk_bdev_gpt.a 00:02:21.527 SO libspdk_blobfs_bdev.so.6.0 00:02:21.527 SO libspdk_bdev_gpt.so.6.0 00:02:21.527 LIB libspdk_bdev_error.a 00:02:21.527 LIB libspdk_bdev_null.a 00:02:21.527 LIB libspdk_bdev_split.a 00:02:21.527 SO libspdk_bdev_null.so.6.0 00:02:21.527 SO libspdk_bdev_error.so.6.0 00:02:21.527 LIB libspdk_bdev_passthru.a 00:02:21.527 SYMLINK libspdk_blobfs_bdev.so 00:02:21.527 LIB libspdk_bdev_ftl.a 00:02:21.527 SYMLINK libspdk_bdev_gpt.so 00:02:21.527 SO libspdk_bdev_split.so.6.0 00:02:21.527 SO libspdk_bdev_passthru.so.6.0 00:02:21.527 SYMLINK libspdk_bdev_null.so 00:02:21.527 LIB libspdk_bdev_aio.a 00:02:21.527 SYMLINK libspdk_bdev_error.so 00:02:21.789 SO libspdk_bdev_ftl.so.6.0 00:02:21.789 LIB libspdk_bdev_delay.a 00:02:21.789 LIB libspdk_bdev_zone_block.a 00:02:21.789 LIB libspdk_bdev_malloc.a 00:02:21.789 SO libspdk_bdev_aio.so.6.0 00:02:21.789 SYMLINK libspdk_bdev_split.so 00:02:21.789 SYMLINK libspdk_bdev_passthru.so 00:02:21.789 LIB libspdk_bdev_iscsi.a 00:02:21.789 SO libspdk_bdev_malloc.so.6.0 00:02:21.789 SO libspdk_bdev_zone_block.so.6.0 00:02:21.789 SO libspdk_bdev_delay.so.6.0 00:02:21.789 SYMLINK libspdk_bdev_ftl.so 00:02:21.789 SYMLINK libspdk_bdev_aio.so 00:02:21.789 SO libspdk_bdev_iscsi.so.6.0 00:02:21.789 LIB libspdk_bdev_lvol.a 00:02:21.789 SYMLINK libspdk_bdev_zone_block.so 00:02:21.789 SYMLINK libspdk_bdev_delay.so 00:02:21.789 SYMLINK libspdk_bdev_malloc.so 00:02:21.789 LIB libspdk_bdev_virtio.a 00:02:21.789 SYMLINK libspdk_bdev_iscsi.so 00:02:21.789 SO libspdk_bdev_lvol.so.6.0 00:02:21.789 SO libspdk_bdev_virtio.so.6.0 00:02:21.789 SYMLINK libspdk_bdev_lvol.so 00:02:22.050 SYMLINK 
libspdk_bdev_virtio.so 00:02:22.050 LIB libspdk_bdev_raid.a 00:02:22.311 SO libspdk_bdev_raid.so.6.0 00:02:22.311 SYMLINK libspdk_bdev_raid.so 00:02:23.329 LIB libspdk_bdev_nvme.a 00:02:23.329 SO libspdk_bdev_nvme.so.7.0 00:02:23.329 SYMLINK libspdk_bdev_nvme.so 00:02:24.273 CC module/event/subsystems/keyring/keyring.o 00:02:24.273 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.273 CC module/event/subsystems/vmd/vmd.o 00:02:24.273 CC module/event/subsystems/sock/sock.o 00:02:24.273 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.273 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.273 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.273 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.273 LIB libspdk_event_vhost_blk.a 00:02:24.273 LIB libspdk_event_scheduler.a 00:02:24.273 LIB libspdk_event_keyring.a 00:02:24.273 LIB libspdk_event_sock.a 00:02:24.273 LIB libspdk_event_iobuf.a 00:02:24.273 LIB libspdk_event_vmd.a 00:02:24.273 SO libspdk_event_vhost_blk.so.3.0 00:02:24.273 SO libspdk_event_scheduler.so.4.0 00:02:24.273 SO libspdk_event_sock.so.5.0 00:02:24.273 SO libspdk_event_keyring.so.1.0 00:02:24.273 SO libspdk_event_iobuf.so.3.0 00:02:24.273 SO libspdk_event_vmd.so.6.0 00:02:24.273 SYMLINK libspdk_event_vhost_blk.so 00:02:24.273 SYMLINK libspdk_event_scheduler.so 00:02:24.273 SYMLINK libspdk_event_sock.so 00:02:24.273 SYMLINK libspdk_event_keyring.so 00:02:24.273 SYMLINK libspdk_event_iobuf.so 00:02:24.273 SYMLINK libspdk_event_vmd.so 00:02:24.535 CC module/event/subsystems/accel/accel.o 00:02:24.796 LIB libspdk_event_accel.a 00:02:24.796 SO libspdk_event_accel.so.6.0 00:02:25.056 SYMLINK libspdk_event_accel.so 00:02:25.318 CC module/event/subsystems/bdev/bdev.o 00:02:25.318 LIB libspdk_event_bdev.a 00:02:25.579 SO libspdk_event_bdev.so.6.0 00:02:25.579 SYMLINK libspdk_event_bdev.so 00:02:25.839 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.839 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.839 CC module/event/subsystems/nbd/nbd.o 00:02:25.839 CC module/event/subsystems/scsi/scsi.o 00:02:25.839 CC module/event/subsystems/ublk/ublk.o 00:02:26.100 LIB libspdk_event_scsi.a 00:02:26.100 LIB libspdk_event_nbd.a 00:02:26.100 SO libspdk_event_scsi.so.6.0 00:02:26.100 LIB libspdk_event_ublk.a 00:02:26.100 SO libspdk_event_nbd.so.6.0 00:02:26.100 LIB libspdk_event_nvmf.a 00:02:26.100 SO libspdk_event_ublk.so.3.0 00:02:26.100 SYMLINK libspdk_event_scsi.so 00:02:26.100 SO libspdk_event_nvmf.so.6.0 00:02:26.100 SYMLINK libspdk_event_nbd.so 00:02:26.100 SYMLINK libspdk_event_ublk.so 00:02:26.100 SYMLINK libspdk_event_nvmf.so 00:02:26.361 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.361 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.621 LIB libspdk_event_vhost_scsi.a 00:02:26.621 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.621 LIB libspdk_event_iscsi.a 00:02:26.621 SO libspdk_event_iscsi.so.6.0 00:02:26.621 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.882 SYMLINK libspdk_event_iscsi.so 00:02:26.882 SO libspdk.so.6.0 00:02:26.882 SYMLINK libspdk.so 00:02:27.456 CC app/spdk_nvme_identify/identify.o 00:02:27.456 CXX app/trace/trace.o 00:02:27.457 CC app/spdk_lspci/spdk_lspci.o 00:02:27.457 CC app/trace_record/trace_record.o 00:02:27.457 CC test/rpc_client/rpc_client_test.o 00:02:27.457 CC app/spdk_nvme_perf/perf.o 00:02:27.457 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.457 TEST_HEADER include/spdk/accel.h 00:02:27.457 TEST_HEADER include/spdk/accel_module.h 00:02:27.457 TEST_HEADER include/spdk/assert.h 00:02:27.457 TEST_HEADER 
include/spdk/barrier.h 00:02:27.457 TEST_HEADER include/spdk/bdev.h 00:02:27.457 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.457 TEST_HEADER include/spdk/base64.h 00:02:27.457 CC app/spdk_dd/spdk_dd.o 00:02:27.457 TEST_HEADER include/spdk/bdev_module.h 00:02:27.457 CC app/nvmf_tgt/nvmf_main.o 00:02:27.457 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.457 TEST_HEADER include/spdk/bit_array.h 00:02:27.457 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.457 CC app/spdk_tgt/spdk_tgt.o 00:02:27.457 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.457 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.457 TEST_HEADER include/spdk/blobfs.h 00:02:27.457 CC app/spdk_top/spdk_top.o 00:02:27.457 TEST_HEADER include/spdk/bit_pool.h 00:02:27.457 TEST_HEADER include/spdk/config.h 00:02:27.457 TEST_HEADER include/spdk/conf.h 00:02:27.457 TEST_HEADER include/spdk/crc32.h 00:02:27.457 TEST_HEADER include/spdk/crc64.h 00:02:27.457 TEST_HEADER include/spdk/dif.h 00:02:27.457 TEST_HEADER include/spdk/cpuset.h 00:02:27.457 TEST_HEADER include/spdk/crc16.h 00:02:27.457 TEST_HEADER include/spdk/dma.h 00:02:27.457 TEST_HEADER include/spdk/blob.h 00:02:27.457 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.457 TEST_HEADER include/spdk/endian.h 00:02:27.457 TEST_HEADER include/spdk/fd_group.h 00:02:27.457 TEST_HEADER include/spdk/env.h 00:02:27.457 CC app/vhost/vhost.o 00:02:27.457 TEST_HEADER include/spdk/fd.h 00:02:27.457 TEST_HEADER include/spdk/ftl.h 00:02:27.457 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.457 TEST_HEADER include/spdk/hexlify.h 00:02:27.457 TEST_HEADER include/spdk/event.h 00:02:27.457 TEST_HEADER include/spdk/idxd.h 00:02:27.457 TEST_HEADER include/spdk/histogram_data.h 00:02:27.457 TEST_HEADER include/spdk/init.h 00:02:27.457 TEST_HEADER include/spdk/file.h 00:02:27.457 TEST_HEADER include/spdk/ioat.h 00:02:27.457 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.457 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.457 TEST_HEADER include/spdk/json.h 00:02:27.457 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.457 TEST_HEADER include/spdk/keyring.h 00:02:27.457 TEST_HEADER include/spdk/likely.h 00:02:27.457 TEST_HEADER include/spdk/log.h 00:02:27.457 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.457 TEST_HEADER include/spdk/lvol.h 00:02:27.457 TEST_HEADER include/spdk/memory.h 00:02:27.457 TEST_HEADER include/spdk/nbd.h 00:02:27.457 TEST_HEADER include/spdk/nvme.h 00:02:27.457 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.457 TEST_HEADER include/spdk/notify.h 00:02:27.457 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.457 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.457 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.457 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.457 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.457 TEST_HEADER include/spdk/nvmf.h 00:02:27.457 TEST_HEADER include/spdk/keyring_module.h 00:02:27.457 TEST_HEADER include/spdk/opal.h 00:02:27.457 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.457 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.457 TEST_HEADER include/spdk/mmio.h 00:02:27.457 TEST_HEADER include/spdk/pipe.h 00:02:27.457 TEST_HEADER include/spdk/opal_spec.h 00:02:27.457 TEST_HEADER include/spdk/pci_ids.h 00:02:27.457 TEST_HEADER include/spdk/queue.h 00:02:27.457 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.457 TEST_HEADER include/spdk/reduce.h 00:02:27.457 TEST_HEADER include/spdk/rpc.h 00:02:27.457 TEST_HEADER include/spdk/scheduler.h 00:02:27.457 TEST_HEADER include/spdk/trace.h 00:02:27.457 TEST_HEADER include/spdk/thread.h 00:02:27.457 TEST_HEADER include/spdk/scsi.h 
00:02:27.457 TEST_HEADER include/spdk/string.h 00:02:27.457 TEST_HEADER include/spdk/trace_parser.h 00:02:27.457 TEST_HEADER include/spdk/util.h 00:02:27.457 TEST_HEADER include/spdk/ublk.h 00:02:27.457 TEST_HEADER include/spdk/version.h 00:02:27.457 TEST_HEADER include/spdk/uuid.h 00:02:27.457 TEST_HEADER include/spdk/sock.h 00:02:27.457 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.457 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.457 TEST_HEADER include/spdk/tree.h 00:02:27.457 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.457 TEST_HEADER include/spdk/vmd.h 00:02:27.457 TEST_HEADER include/spdk/stdinc.h 00:02:27.457 TEST_HEADER include/spdk/xor.h 00:02:27.457 CXX test/cpp_headers/accel_module.o 00:02:27.457 CXX test/cpp_headers/accel.o 00:02:27.457 CXX test/cpp_headers/assert.o 00:02:27.457 CXX test/cpp_headers/base64.o 00:02:27.457 CXX test/cpp_headers/bdev.o 00:02:27.457 CXX test/cpp_headers/barrier.o 00:02:27.457 TEST_HEADER include/spdk/vhost.h 00:02:27.457 CXX test/cpp_headers/bit_pool.o 00:02:27.457 CXX test/cpp_headers/bit_array.o 00:02:27.457 CXX test/cpp_headers/blob_bdev.o 00:02:27.457 TEST_HEADER include/spdk/zipf.h 00:02:27.457 CXX test/cpp_headers/bdev_zone.o 00:02:27.457 CXX test/cpp_headers/blobfs.o 00:02:27.457 CXX test/cpp_headers/bdev_module.o 00:02:27.457 CXX test/cpp_headers/blob.o 00:02:27.457 CC app/fio/nvme/fio_plugin.o 00:02:27.457 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.457 CXX test/cpp_headers/cpuset.o 00:02:27.457 CXX test/cpp_headers/crc16.o 00:02:27.457 CXX test/cpp_headers/crc32.o 00:02:27.457 CXX test/cpp_headers/crc64.o 00:02:27.457 CXX test/cpp_headers/config.o 00:02:27.457 CXX test/cpp_headers/conf.o 00:02:27.457 CXX test/cpp_headers/dma.o 00:02:27.457 CC examples/accel/perf/accel_perf.o 00:02:27.457 CXX test/cpp_headers/dif.o 00:02:27.457 CXX test/cpp_headers/event.o 00:02:27.457 CXX test/cpp_headers/env_dpdk.o 00:02:27.457 CXX test/cpp_headers/fd_group.o 00:02:27.457 CXX test/cpp_headers/endian.o 00:02:27.457 CC test/event/reactor_perf/reactor_perf.o 00:02:27.457 CXX test/cpp_headers/env.o 00:02:27.457 CXX test/cpp_headers/ftl.o 00:02:27.457 CC test/event/event_perf/event_perf.o 00:02:27.457 CXX test/cpp_headers/fd.o 00:02:27.457 CXX test/cpp_headers/file.o 00:02:27.747 CXX test/cpp_headers/idxd.o 00:02:27.747 CXX test/cpp_headers/idxd_spec.o 00:02:27.747 CXX test/cpp_headers/init.o 00:02:27.747 CXX test/cpp_headers/gpt_spec.o 00:02:27.747 CXX test/cpp_headers/ioat.o 00:02:27.747 CXX test/cpp_headers/hexlify.o 00:02:27.747 CC test/thread/poller_perf/poller_perf.o 00:02:27.747 CXX test/cpp_headers/ioat_spec.o 00:02:27.747 CC examples/ioat/verify/verify.o 00:02:27.747 CXX test/cpp_headers/histogram_data.o 00:02:27.747 CXX test/cpp_headers/json.o 00:02:27.747 CXX test/cpp_headers/keyring.o 00:02:27.747 CC examples/nvme/hello_world/hello_world.o 00:02:27.747 CXX test/cpp_headers/iscsi_spec.o 00:02:27.747 CC test/nvme/err_injection/err_injection.o 00:02:27.747 CXX test/cpp_headers/likely.o 00:02:27.747 CC test/app/histogram_perf/histogram_perf.o 00:02:27.747 CXX test/cpp_headers/jsonrpc.o 00:02:27.747 CXX test/cpp_headers/lvol.o 00:02:27.747 CXX test/cpp_headers/mmio.o 00:02:27.747 CC test/env/memory/memory_ut.o 00:02:27.747 CXX test/cpp_headers/nbd.o 00:02:27.747 CXX test/cpp_headers/notify.o 00:02:27.747 CXX test/cpp_headers/keyring_module.o 00:02:27.747 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.747 CXX test/cpp_headers/log.o 00:02:27.747 CXX test/cpp_headers/nvme_spec.o 00:02:27.747 CXX test/cpp_headers/nvme_zns.o 00:02:27.747 CXX 
test/cpp_headers/memory.o 00:02:27.747 CXX test/cpp_headers/nvme.o 00:02:27.747 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.747 CXX test/cpp_headers/nvme_intel.o 00:02:27.747 CC test/nvme/connect_stress/connect_stress.o 00:02:27.747 CC test/nvme/reserve/reserve.o 00:02:27.747 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.747 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.747 CC test/nvme/reset/reset.o 00:02:27.747 CXX test/cpp_headers/nvmf_spec.o 00:02:27.747 CXX test/cpp_headers/pci_ids.o 00:02:27.747 CXX test/cpp_headers/nvmf.o 00:02:27.747 CXX test/cpp_headers/pipe.o 00:02:27.747 CXX test/cpp_headers/opal_spec.o 00:02:27.747 CXX test/cpp_headers/queue.o 00:02:27.747 CXX test/cpp_headers/reduce.o 00:02:27.747 CXX test/cpp_headers/rpc.o 00:02:27.747 CXX test/cpp_headers/nvmf_transport.o 00:02:27.747 CC examples/vmd/led/led.o 00:02:27.747 CC test/event/scheduler/scheduler.o 00:02:27.747 CC test/event/reactor/reactor.o 00:02:27.747 CXX test/cpp_headers/scheduler.o 00:02:27.747 CXX test/cpp_headers/opal.o 00:02:27.747 CC test/app/jsoncat/jsoncat.o 00:02:27.747 CC test/nvme/overhead/overhead.o 00:02:27.747 CC examples/nvme/arbitration/arbitration.o 00:02:27.747 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.747 CC test/env/pci/pci_ut.o 00:02:27.747 CC examples/nvme/reconnect/reconnect.o 00:02:27.747 CC test/nvme/startup/startup.o 00:02:27.747 CC test/blobfs/mkfs/mkfs.o 00:02:27.747 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.747 LINK nvmf_tgt 00:02:27.747 LINK spdk_trace_record 00:02:27.747 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.747 CC test/nvme/aer/aer.o 00:02:27.747 LINK spdk_nvme_discover 00:02:27.747 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.747 CC examples/nvme/hotplug/hotplug.o 00:02:27.747 CC test/event/app_repeat/app_repeat.o 00:02:27.747 CC examples/ioat/perf/perf.o 00:02:27.747 CC test/app/stub/stub.o 00:02:27.747 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.747 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.747 CC examples/nvmf/nvmf/nvmf.o 00:02:27.747 CC test/env/vtophys/vtophys.o 00:02:27.747 CC test/accel/dif/dif.o 00:02:27.747 LINK vhost 00:02:27.747 CC test/nvme/boot_partition/boot_partition.o 00:02:27.747 CC test/nvme/fdp/fdp.o 00:02:27.747 CC test/nvme/simple_copy/simple_copy.o 00:02:27.747 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.019 CC examples/blob/hello_world/hello_blob.o 00:02:28.019 CC test/app/bdev_svc/bdev_svc.o 00:02:28.019 CC examples/sock/hello_world/hello_sock.o 00:02:28.019 CC examples/idxd/perf/perf.o 00:02:28.019 CC test/nvme/e2edp/nvme_dp.o 00:02:28.019 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.019 CXX test/cpp_headers/scsi.o 00:02:28.019 CC test/nvme/cuse/cuse.o 00:02:28.019 CC examples/util/zipf/zipf.o 00:02:28.019 CC test/bdev/bdevio/bdevio.o 00:02:28.019 CC app/fio/bdev/fio_plugin.o 00:02:28.019 CC examples/nvme/abort/abort.o 00:02:28.019 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.019 CXX test/cpp_headers/scsi_spec.o 00:02:28.019 LINK event_perf 00:02:28.019 CC test/dma/test_dma/test_dma.o 00:02:28.019 LINK iscsi_tgt 00:02:28.019 CXX test/cpp_headers/sock.o 00:02:28.019 LINK spdk_trace 00:02:28.019 CC examples/blob/cli/blobcli.o 00:02:28.019 CXX test/cpp_headers/stdinc.o 00:02:28.019 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.019 CXX test/cpp_headers/string.o 00:02:28.019 CXX test/cpp_headers/thread.o 00:02:28.019 CC test/nvme/compliance/nvme_compliance.o 00:02:28.019 CXX test/cpp_headers/trace.o 00:02:28.019 CC test/nvme/sgl/sgl.o 00:02:28.019 LINK spdk_lspci 00:02:28.019 LINK 
spdk_dd 00:02:28.019 CXX test/cpp_headers/tree.o 00:02:28.019 CXX test/cpp_headers/trace_parser.o 00:02:28.019 CXX test/cpp_headers/ublk.o 00:02:28.019 LINK hello_world 00:02:28.019 CXX test/cpp_headers/util.o 00:02:28.019 LINK connect_stress 00:02:28.019 CXX test/cpp_headers/uuid.o 00:02:28.019 LINK reserve 00:02:28.019 CXX test/cpp_headers/version.o 00:02:28.019 CXX test/cpp_headers/vfio_user_spec.o 00:02:28.019 CXX test/cpp_headers/vfio_user_pci.o 00:02:28.019 CXX test/cpp_headers/vhost.o 00:02:28.019 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.019 CXX test/cpp_headers/vmd.o 00:02:28.281 CXX test/cpp_headers/xor.o 00:02:28.281 LINK poller_perf 00:02:28.281 CXX test/cpp_headers/zipf.o 00:02:28.281 CC examples/thread/thread/thread_ex.o 00:02:28.281 LINK app_repeat 00:02:28.281 LINK startup 00:02:28.281 LINK verify 00:02:28.281 LINK stub 00:02:28.281 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.281 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.281 LINK pmr_persistence 00:02:28.281 LINK scheduler 00:02:28.281 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.281 LINK overhead 00:02:28.281 LINK reset 00:02:28.281 CC test/lvol/esnap/esnap.o 00:02:28.281 LINK rpc_client_test 00:02:28.281 LINK fdp 00:02:28.281 LINK nvme_dp 00:02:28.281 LINK nvmf 00:02:28.540 LINK spdk_tgt 00:02:28.540 LINK spdk_nvme 00:02:28.540 LINK interrupt_tgt 00:02:28.540 LINK spdk_nvme_perf 00:02:28.541 LINK idxd_perf 00:02:28.541 LINK vtophys 00:02:28.541 LINK nvme_compliance 00:02:28.541 LINK sgl 00:02:28.541 LINK test_dma 00:02:28.541 LINK dif 00:02:28.541 LINK abort 00:02:28.541 LINK reactor_perf 00:02:28.541 LINK env_dpdk_post_init 00:02:28.541 LINK spdk_nvme_identify 00:02:28.541 LINK bdevio 00:02:28.541 LINK cmb_copy 00:02:28.541 LINK histogram_perf 00:02:28.541 LINK hello_bdev 00:02:28.541 LINK thread 00:02:28.541 LINK reactor 00:02:28.541 LINK simple_copy 00:02:28.541 LINK boot_partition 00:02:28.541 LINK jsoncat 00:02:28.541 LINK zipf 00:02:28.541 LINK lsvmd 00:02:28.541 LINK spdk_bdev 00:02:28.541 LINK led 00:02:28.541 LINK mkfs 00:02:28.801 LINK hello_blob 00:02:28.801 LINK err_injection 00:02:28.801 LINK ioat_perf 00:02:28.801 LINK doorbell_aers 00:02:28.801 LINK hello_sock 00:02:28.801 LINK bdev_svc 00:02:28.801 LINK vhost_fuzz 00:02:28.801 LINK fused_ordering 00:02:28.801 LINK nvme_fuzz 00:02:28.801 LINK hotplug 00:02:28.801 LINK nvme_manage 00:02:28.801 LINK aer 00:02:28.801 LINK blobcli 00:02:28.801 LINK accel_perf 00:02:28.801 LINK arbitration 00:02:29.063 LINK reconnect 00:02:29.063 LINK pci_ut 00:02:29.063 LINK memory_ut 00:02:29.324 LINK mem_callbacks 00:02:29.324 LINK bdevperf 00:02:29.324 LINK spdk_top 00:02:29.324 LINK cuse 00:02:29.585 LINK iscsi_fuzz 00:02:32.130 LINK esnap 00:02:32.391 00:02:32.391 real 0m49.316s 00:02:32.391 user 6m29.516s 00:02:32.391 sys 4m33.384s 00:02:32.391 00:27:50 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:32.391 00:27:50 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.391 ************************************ 00:02:32.391 END TEST make 00:02:32.391 ************************************ 00:02:32.652 00:27:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.652 00:27:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.652 00:27:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.652 00:27:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.652 00:27:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid 
]] 00:02:32.652 00:27:50 -- pm/common@44 -- $ pid=64326 00:02:32.652 00:27:50 -- pm/common@50 -- $ kill -TERM 64326 00:02:32.652 00:27:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.652 00:27:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.652 00:27:50 -- pm/common@44 -- $ pid=64327 00:02:32.652 00:27:50 -- pm/common@50 -- $ kill -TERM 64327 00:02:32.652 00:27:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.652 00:27:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.652 00:27:50 -- pm/common@44 -- $ pid=64328 00:02:32.652 00:27:50 -- pm/common@50 -- $ kill -TERM 64328 00:02:32.652 00:27:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.652 00:27:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.652 00:27:50 -- pm/common@44 -- $ pid=64351 00:02:32.652 00:27:50 -- pm/common@50 -- $ sudo -E kill -TERM 64351 00:02:32.652 00:27:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.652 00:27:50 -- nvmf/common.sh@7 -- # uname -s 00:02:32.652 00:27:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.652 00:27:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.652 00:27:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.652 00:27:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.652 00:27:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.652 00:27:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.652 00:27:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.652 00:27:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.652 00:27:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.652 00:27:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.652 00:27:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:32.652 00:27:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:32.652 00:27:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.652 00:27:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.652 00:27:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.652 00:27:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.652 00:27:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.652 00:27:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.652 00:27:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.652 00:27:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.652 00:27:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.652 00:27:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.652 00:27:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.652 00:27:50 -- paths/export.sh@5 -- # export PATH 00:02:32.652 00:27:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.652 00:27:50 -- nvmf/common.sh@47 -- # : 0 00:02:32.652 00:27:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.652 00:27:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.652 00:27:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.652 00:27:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.652 00:27:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.652 00:27:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.652 00:27:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.652 00:27:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.652 00:27:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.653 00:27:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.653 00:27:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.653 00:27:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.653 00:27:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.653 00:27:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.653 00:27:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.653 00:27:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.653 00:27:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.653 00:27:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.653 00:27:50 -- spdk/autotest.sh@48 -- # udevadm_pid=127099 00:02:32.653 00:27:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.653 00:27:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.653 00:27:50 -- pm/common@17 -- # local monitor 00:02:32.653 00:27:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.653 00:27:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.653 00:27:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.653 00:27:50 -- pm/common@21 -- # date +%s 00:02:32.653 00:27:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.653 00:27:50 -- pm/common@21 -- # date +%s 00:02:32.653 00:27:50 -- pm/common@25 -- # sleep 1 00:02:32.653 00:27:50 -- pm/common@21 -- # date +%s 00:02:32.653 00:27:50 -- pm/common@21 -- # date +%s 00:02:32.653 00:27:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717799270 00:02:32.653 00:27:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717799270 00:02:32.653 00:27:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717799270 00:02:32.653 00:27:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717799270 00:02:32.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717799270_collect-vmstat.pm.log 00:02:32.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717799270_collect-cpu-load.pm.log 00:02:32.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717799270_collect-cpu-temp.pm.log 00:02:32.913 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717799270_collect-bmc-pm.bmc.pm.log 00:02:33.856 00:27:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.856 00:27:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.856 00:27:51 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:33.856 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:02:33.856 00:27:51 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.856 00:27:51 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:33.856 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:02:33.856 00:27:51 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.856 00:27:51 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.856 00:27:51 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.856 00:27:51 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.856 00:27:51 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.856 00:27:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.856 00:27:51 -- common/autotest_common.sh@1454 -- # uname 00:02:33.856 00:27:51 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:33.856 00:27:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.856 00:27:51 -- common/autotest_common.sh@1474 -- # uname 00:02:33.856 00:27:51 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:33.856 00:27:51 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.856 00:27:51 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.856 00:27:51 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.856 00:27:51 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:33.856 00:27:51 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.856 --rc lcov_branch_coverage=1 00:02:33.856 --rc lcov_function_coverage=1 00:02:33.856 --rc genhtml_branch_coverage=1 00:02:33.856 --rc genhtml_function_coverage=1 00:02:33.856 --rc genhtml_legend=1 00:02:33.856 --rc geninfo_all_blocks=1 00:02:33.856 ' 
00:02:33.856 00:27:51 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.856 --rc lcov_branch_coverage=1 00:02:33.856 --rc lcov_function_coverage=1 00:02:33.856 --rc genhtml_branch_coverage=1 00:02:33.856 --rc genhtml_function_coverage=1 00:02:33.856 --rc genhtml_legend=1 00:02:33.856 --rc geninfo_all_blocks=1 00:02:33.856 ' 00:02:33.856 00:27:51 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.856 --rc lcov_branch_coverage=1 00:02:33.856 --rc lcov_function_coverage=1 00:02:33.856 --rc genhtml_branch_coverage=1 00:02:33.856 --rc genhtml_function_coverage=1 00:02:33.856 --rc genhtml_legend=1 00:02:33.856 --rc geninfo_all_blocks=1 00:02:33.856 --no-external' 00:02:33.856 00:27:51 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.856 --rc lcov_branch_coverage=1 00:02:33.856 --rc lcov_function_coverage=1 00:02:33.856 --rc genhtml_branch_coverage=1 00:02:33.856 --rc genhtml_function_coverage=1 00:02:33.856 --rc genhtml_legend=1 00:02:33.856 --rc geninfo_all_blocks=1 00:02:33.856 --no-external' 00:02:33.856 00:27:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.856 lcov: LCOV version 1.14 00:02:33.856 00:27:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:46.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:01.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:01.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:01.002 
[the same pair of messages — '<header>.gcno:no functions found' and 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno' — repeats for every remaining header stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers, bit_pool.gcno through zipf.gcno] 00:03:01.947 00:28:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:28:19 -- common/autotest_common.sh@723 -- # xtrace_disable
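The 'no functions found' warnings above are expected for these objects: each cpp_headers stub just compiles one public header into an otherwise empty translation unit, so its .gcno carries no function records and the initial capture has nothing to record. A hypothetical two-file repro of the same class of warning, assuming a gcc/lcov toolchain like the one in this run:

    # An empty translation unit yields a .gcno with no function records,
    # which geninfo reports as a warning rather than an error.
    echo '#include <stdint.h>' > stub.c
    gcc --coverage -c stub.c -o stub.o   # writes stub.gcno alongside stub.o
    lcov -c -i -d . -o baseline.info     # expect a "stub.gcno:no functions found" warning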
00:03:01.947 00:28:19 -- common/autotest_common.sh@10 -- # set +x 00:03:01.947 00:28:19 -- spdk/autotest.sh@91 -- # rm -f 00:03:01.947 00:28:19 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.253 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:05.253 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.253 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.513 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.513 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.513 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.773 00:28:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:05.773 00:28:23 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:05.773 00:28:23 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:05.773 00:28:23 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:05.773 00:28:23 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:05.773 00:28:23 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:05.773 00:28:23 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:05.773 00:28:23 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.773 00:28:23 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:05.773 00:28:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:05.773 00:28:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.773 00:28:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:05.773 00:28:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:05.773 00:28:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:05.773 00:28:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.773 No valid GPT data, bailing 00:03:05.773 00:28:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.773 00:28:23 -- scripts/common.sh@391 -- # pt= 00:03:05.773 00:28:23 -- scripts/common.sh@392 -- # return 1 00:03:05.773 00:28:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.773 1+0 records in 00:03:05.773 1+0 records out 00:03:05.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366318 s, 286 MB/s 00:03:05.773 00:28:23 -- spdk/autotest.sh@118 -- # sync 00:03:05.773 00:28:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.773 00:28:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.773 00:28:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.919 00:28:32 -- spdk/autotest.sh@124 -- # 
uname -s 00:03:13.919 00:28:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:13.919 00:28:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.919 00:28:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:13.919 00:28:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:13.919 00:28:32 -- common/autotest_common.sh@10 -- # set +x 00:03:13.919 ************************************ 00:03:13.919 START TEST setup.sh 00:03:13.919 ************************************ 00:03:13.919 00:28:32 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.919 * Looking for test storage... 00:03:13.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.919 00:28:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:13.919 00:28:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:13.919 00:28:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.919 00:28:32 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:13.919 00:28:32 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:13.919 00:28:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.919 ************************************ 00:03:13.919 START TEST acl 00:03:13.919 ************************************ 00:03:13.919 00:28:32 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:14.181 * Looking for test storage... 00:03:14.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.181 00:28:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:14.181 00:28:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:14.181 00:28:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.181 00:28:32 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.482 00:28:35 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.482 00:28:35 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:17.482 00:28:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.482 00:28:35 setup.sh.acl -- setup/acl.sh@15 -- # setup 
output status 00:03:17.482 00:28:35 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.482 00:28:35 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:20.786 Hugepages 00:03:20.786 node hugesize free / total 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.786 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:03:21.048 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 
00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.048 00:28:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.049 00:28:39 setup.sh.acl -- 
setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:21.049 00:28:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.049 00:28:39 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:21.049 00:28:39 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:21.049 00:28:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.049 ************************************ 00:03:21.049 START TEST denied 00:03:21.049 ************************************ 00:03:21.049 00:28:39 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:21.310 00:28:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:21.310 00:28:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:21.310 00:28:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:21.310 00:28:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.310 00:28:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.577 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.577 00:28:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.784 00:03:29.784 real 0m8.675s 00:03:29.784 user 0m2.848s 00:03:29.784 sys 0m5.121s 00:03:29.784 00:28:48 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.784 00:28:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:29.784 ************************************ 00:03:29.784 END TEST denied 00:03:29.784 ************************************ 00:03:29.784 00:28:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:29.784 00:28:48 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.784 00:28:48 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.784 00:28:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.045 ************************************ 00:03:30.045 START TEST allowed 00:03:30.045 ************************************ 00:03:30.045 00:28:48 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:30.045 00:28:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:30.045 00:28:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:30.045 00:28:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:30.045 00:28:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.045 00:28:48 
setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.337 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:35.337 00:28:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:35.337 00:28:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:35.337 00:28:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:35.337 00:28:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.337 00:28:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.552 00:03:39.552 real 0m8.992s 00:03:39.552 user 0m2.573s 00:03:39.552 sys 0m4.647s 00:03:39.552 00:28:57 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:39.552 00:28:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:39.552 ************************************ 00:03:39.552 END TEST allowed 00:03:39.552 ************************************ 00:03:39.552 00:03:39.552 real 0m24.910s 00:03:39.552 user 0m7.879s 00:03:39.552 sys 0m14.613s 00:03:39.552 00:28:57 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:39.552 00:28:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.552 ************************************ 00:03:39.552 END TEST acl 00:03:39.552 ************************************ 00:03:39.552 00:28:57 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.552 00:28:57 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:39.552 00:28:57 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:39.552 00:28:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.552 ************************************ 00:03:39.552 START TEST hugepages 00:03:39.552 ************************************ 00:03:39.552 00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.552 * Looking for test storage... 
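Both acl subtests above drive the same mechanism: setup.sh honors the PCI_BLOCKED and PCI_ALLOWED environment filters when (re)binding controllers. Roughly, from the repository root (0000:65:00.0 is this rig's NVMe controller; substitute your own BDF, and note the script needs root):

    # Deny-list a controller: setup.sh skips it ("Skipping denied controller ...").
    PCI_BLOCKED='0000:65:00.0' ./scripts/setup.sh config
    # Allow-list only that controller: in this run it was rebound nvme -> vfio-pci.
    PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config
    # Hand all devices back to their kernel drivers afterwards.
    ./scripts/setup.sh reset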
00:03:39.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103295900 kB' 'MemAvailable: 106555268 kB' 'Buffers: 2704 kB' 'Cached: 14297476 kB' 'SwapCached: 0 kB' 'Active: 11340032 kB' 'Inactive: 3514596 kB' 'Active(anon): 10928132 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557944 kB' 'Mapped: 168780 kB' 'Shmem: 10373684 kB' 'KReclaimable: 314404 kB' 'Slab: 1159544 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 845140 kB' 'KernelStack: 27248 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 12388928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235304 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.552 00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.553 00:28:57 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' [the same four-entry xtrace — "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", "continue", "IFS=': '", "read -r var val _" — repeats for every remaining /proc/meminfo field, MemAvailable through HugePages_Surp, until the loop reaches Hugepagesize] 00:03:39.554 00:28:57 setup.sh.hugepages --
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... repetitive xtrace condensed: hugepages.sh@39-41 looped over both NUMA nodes and each hugepages-*/ size directory, echoing 0 into every nr_hugepages entry ...]
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:39.554 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:39.554 00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:39.554 00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
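The Hugepagesize lookup traced above is setup/common.sh's get_meminfo pattern: split each /proc/meminfo line on ': ' into key and value, skip until the key matches, then echo the value and return. A minimal standalone sketch of that pattern (get_meminfo_value and the final fallback return are illustrative, not SPDK's exact code):

    # Sketch of the traced lookup loop; get_meminfo_value is a hypothetical name.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip Shmem, Slab, ... until the key matches
            echo "$val"                        # value in kB for most keys
            return 0
        done < /proc/meminfo
        return 1                               # key not present (assumed behavior)
    }
    get_meminfo_value Hugepagesize   # prints 2048 on this host, per the trace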
00:03:39.554 00:28:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:39.554 ************************************
00:03:39.554 START TEST default_setup
00:03:39.554 ************************************
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:39.554 00:28:57 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.555 00:28:57 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
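For reference, the numbers in the default_setup trace are self-consistent: get_test_nr_hugepages received 2097152 (kB, i.e. a 2 GiB pool, going by the traced values) and default_hugepages is 2048 kB, which yields nr_hugepages=1024, all of it pinned on node 0 by get_test_nr_hugepages_per_node. A sketch of that arithmetic (variable names are illustrative):

    # Reproduce the traced page-count arithmetic (names are illustrative).
    size_kb=2097152      # requested pool size, as passed in the trace
    hugepage_kb=2048     # default hugepage size from /proc/meminfo
    echo $(( size_kb / hugepage_kb ))   # -> 1024, matching nr_hugepages=1024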
00:03:42.857 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:42.857 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:43.122 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503144 kB' 'MemAvailable: 108762512 kB' 'Buffers: 2704 kB' 'Cached: 14297600 kB' 'SwapCached: 0 kB' 'Active: 11347056 kB' 'Inactive: 3514596 kB' 'Active(anon): 10935156 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564712 kB' 'Mapped: 168584 kB' 'Shmem: 10373808 kB' 'KReclaimable: 314404 kB' 'Slab: 1157276 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842872 kB' 'KernelStack: 27200 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12395592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
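The mapfile/strip pair in this call is what lets get_meminfo serve both the global and the per-node files: lines in /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that /proc/meminfo lines lack. A hedged sketch of that normalization (requires extglob; the node value is illustrative):

    # Sketch: normalize per-node meminfo to the /proc/meminfo line format.
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    node=0                                # illustrative; empty selects /proc/meminfo
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix, if any
    printf '%s\n' "${mem[@]:0:3}"         # first few normalized lines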
[... repetitive xtrace condensed: setup/common.sh@31-32 scanned /proc/meminfo keys MemTotal through HardwareCorrupted without matching AnonHugePages ...]
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
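The scan above ends with anon=0, i.e. no transparent hugepages are in use while the test pool is verified. As an aside, the same single-key lookup can be written as an awk one-liner; this is an equivalent sketch, not what setup/common.sh actually does:

    # Equivalent single-key lookup (sketch only):
    awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo   # -> 0 on this host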
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:43.124 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503400 kB' 'MemAvailable: 108762768 kB' 'Buffers: 2704 kB' 'Cached: 14297600 kB' 'SwapCached: 0 kB' 'Active: 11346740 kB' 'Inactive: 3514596 kB' 'Active(anon): 10934840 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564464 kB' 'Mapped: 168520 kB' 'Shmem: 10373808 kB' 'KReclaimable: 314404 kB' 'Slab: 1157340 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842936 kB' 'KernelStack: 27248 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12395608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235252 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[... repetitive xtrace condensed: setup/common.sh@31-32 scanned /proc/meminfo keys MemTotal through HugePages_Rsvd without matching HugePages_Surp ...]
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
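verify_nr_hugepages re-reads /proc/meminfo once per counter (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, each pass traced above). Where several counters are wanted at once, a single pass can collect them all; a sketch assuming bash 4+ associative arrays are acceptable:

    # Sketch: gather every HugePages_* counter in a single pass (illustrative).
    declare -A hp
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_* ]] && hp[$var]=$val
    done < /proc/meminfo
    echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
         "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"
    # On this host the trace shows total=1024 free=1024 rsvd=0 surp=0.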
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564472 kB' 'Mapped: 168520 kB' 'Shmem: 10373828 kB' 'KReclaimable: 314404 kB' 'Slab: 1157340 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842936 kB' 'KernelStack: 27248 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12395632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235252 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == 
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:43.126 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503152 kB' 'MemAvailable: 108762520 kB' 'Buffers: 2704 kB' 'Cached: 14297620 kB' 'SwapCached: 0 kB' 'Active: 11346760 kB' 'Inactive: 3514596 kB' 'Active(anon): 10934860 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564472 kB' 'Mapped: 168520 kB' 'Shmem: 10373828 kB' 'KReclaimable: 314404 kB' 'Slab: 1157340 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842936 kB' 'KernelStack: 27248 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12395632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235252 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[xtrace condensed: setup/common.sh@17-@31 set up locals and mapfile the snapshot above; the per-key scan then hits 'continue' on every line until HugePages_Rsvd matches]
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:43.128 nr_hugepages=1024
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:43.128 resv_hugepages=0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:43.128 surplus_hugepages=0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:43.128 anon_hugepages=0
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
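The two arithmetic checks just above (hugepages.sh@107 and @109) are the pool-consistency assertion: the kernel's HugePages_Total must equal the requested pool size (nr_hugepages=1024 in this run) plus surplus and reserved pages, both zero here. A self-contained sketch of the same bookkeeping; awk stands in for get_meminfo so the snippet runs on its own.

    #!/usr/bin/env bash
    # Pool-consistency check as at setup/hugepages.sh@107/@109 (sketch).
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    total=$(meminfo_val HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # Every page in the kernel pool must be accounted for by the request
    # plus surplus/reserved pages; otherwise the setup step failed.
    (( total == nr_hugepages + surp + resv )) || { echo "pool mismatch" >&2; exit 1; }
    (( total == nr_hugepages )) && echo "no surplus or reserved pages in use"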
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:43.128 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105501892 kB' 'MemAvailable: 108761260 kB' 'Buffers: 2704 kB' 'Cached: 14297620 kB' 'SwapCached: 0 kB' 'Active: 11346760 kB' 'Inactive: 3514596 kB' 'Active(anon): 10934860 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564488 kB' 'Mapped: 168520 kB' 'Shmem: 10373828 kB' 'KReclaimable: 314404 kB' 'Slab: 1157340 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842936 kB' 'KernelStack: 27248 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12395652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235252 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[xtrace condensed: the same per-key scan repeats over this snapshot until HugePages_Total matches]
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
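After the global pool checks out, the test turns to placement: the get_nodes xtrace below globs the NUMA nodes and records a per-node count (node0=1024, node1=0 on this two-node runner). A sketch of that enumeration; reading the counts from the per-node 2048 kB hugepages directory in sysfs is an assumption here, since the trace does not show where the values come from.

    #!/usr/bin/env bash
    # NUMA-node walk mirroring the get_nodes xtrace below (sketch).
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node1" to the bare id "1".
        # Assumed source for the count: the node's 2048 kB hugepage pool.
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes visible" >&2; exit 1; }
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[$n]} x 2048 kB hugepages"   # here: node0=1024, node1=0
    done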
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:43.130 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50531552 kB' 'MemUsed: 15127456 kB' 'SwapCached: 0 kB' 'Active: 7127492 kB' 'Inactive: 3323512 kB' 'Active(anon): 6978252 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10206756 kB' 'Mapped: 58588 kB' 'AnonPages: 247420 kB' 'Shmem: 6734004 kB' 'KernelStack: 12344 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 687908 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan repeats over node0's snapshot until HugePages_Surp matches]
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:43.131 node0=1024 expecting 1024
00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
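The condensed per-node lookup above ends in the section's closing assertion, 'node0=1024 expecting 1024': the count actually found on the node must match the computed expectation once reserved and surplus pages (both zero here) are folded in. A small sketch using this run's values; the exact adjustment arithmetic is inferred from the '+= resv' and '+= 0' lines in the trace.

    #!/usr/bin/env bash
    # Closing per-node assertion (sketch; values from this run's xtrace).
    node=0
    resv=0                            # HugePages_Rsvd, zero in this run
    surp=0                            # node0 HugePages_Surp, zero as well
    nodes_test=([0]=1024)             # what the test expects on node0
    nodes_sys=([0]=1024)              # what get_nodes actually found
    (( nodes_test[node] += resv + surp ))   # the "+= resv" / "+= 0" steps
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1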
setup/common.sh@31 -- # IFS=': ' 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.131 node0=1024 expecting 1024 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.131 00:03:43.131 real 0m3.922s 00:03:43.131 user 0m1.530s 00:03:43.131 sys 0m2.409s 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:43.131 00:29:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:43.131 ************************************ 00:03:43.131 END TEST default_setup 00:03:43.131 ************************************ 00:03:43.131 00:29:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:43.131 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:43.131 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:43.131 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.131 ************************************ 00:03:43.131 START TEST per_node_1G_alloc 00:03:43.131 ************************************ 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- 
# local user_nodes 00:03:43.131 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.132 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.431 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.431 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.431 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # 
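The get_test_nr_hugepages trace above turns the 1048576 kB (1 GiB) request into 512 default-size pages (1048576 / 2048) and assigns that count to each of the two requested NUMA nodes; with NRHUGE=512 and HUGENODE=0,1 the setup script then reserves 512 pages on each node, which is why the verifier is handed nr_hugepages=1024 system-wide. A minimal standalone sketch of that arithmetic follows; it mirrors the names in the trace but is an illustration under stated assumptions (Hugepagesize taken from /proc/meminfo), not the SPDK script itself.

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage computation seen in the trace above
    # (hypothetical re-creation, not setup/hugepages.sh itself). Assumes
    # /proc/meminfo reports Hugepagesize, 2048 kB on this machine.

    get_test_nr_hugepages() {
        local size=$1            # requested total in kB: 1048576 kB = 1 GiB
        shift
        local node_ids=("$@")    # remaining arguments are NUMA node ids: 0 1
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512

        # Every requested node gets the full count, matching the two
        # nodes_test[_no_nodes]=512 assignments in the trace.
        declare -g -A nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0 1
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

Once scripts/setup.sh has applied these counts (the NRHUGE=512 HUGENODE=0,1 invocation above), the kernel exposes the per-node result under /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages, which is what the verification pass that follows goes on to cross-check.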
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.698 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.699 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105508996 kB' 'MemAvailable: 108768364 kB' 'Buffers: 2704 kB' 'Cached: 14297752 kB' 'SwapCached: 0 kB' 'Active: 11348448 kB' 'Inactive: 3514596 kB' 'Active(anon): 10936548 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565408 kB' 'Mapped: 168768 kB' 'Shmem: 10373960 kB' 'KReclaimable: 314404 kB' 'Slab: 1157352 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842948 kB' 'KernelStack: 27184 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12396680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[setup/common.sh@31-32: the scan reads the snapshot field by field, MemTotal through HardwareCorrupted, continuing past every field that is not AnonHugePages]
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
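The block above is one complete round-trip through get_meminfo. Because the query is global, $node is empty, so the probed sysfs path degenerates to the literal /sys/devices/system/node/node/meminfo, the -e test fails, and the function falls back to /proc/meminfo before scanning it field by field with IFS=': ' read. A condensed, self-contained re-creation of that loop, assuming only the behaviour visible in the trace (the real function lives in setup/common.sh):

    #!/usr/bin/env bash
    # Hypothetical re-creation of the get_meminfo loop traced above: print the
    # value of one meminfo field, optionally for a single NUMA node.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # With no node argument this tests ".../node/node/meminfo", which
        # fails, exactly as the [[ -e ... ]] step in the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # kB for sizes, bare page counts for HugePages_* fields
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0, matching the trace
    get_meminfo HugePages_Total 0   # per-node variant: reads node0's meminfo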
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.700 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105509644 kB' 'MemAvailable: 108769012 kB' 'Buffers: 2704 kB' 'Cached: 14297752 kB' 'SwapCached: 0 kB' 'Active: 11348100 kB' 'Inactive: 3514596 kB' 'Active(anon): 10936200 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565072 kB' 'Mapped: 168768 kB' 'Shmem: 10373960 kB' 'KReclaimable: 314404 kB' 'Slab: 1157352 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842948 kB' 'KernelStack: 27168 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12396696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[setup/common.sh@31-32: the same field-by-field scan, MemTotal through HugePages_Rsvd, continuing past every field that is not HugePages_Surp]
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
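verify_nr_hugepages issues these three queries back to back over the same snapshot. Using the get_meminfo sketch above, the sequence condenses to the lines below; the commented values are the ones the snapshots report, and the per-node expectation of 512 follows from the nodes_test assignments earlier in the trace (a worked illustration, not the script's literal code):

    anon=$(get_meminfo AnonHugePages)    # 0 kB: no transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted in
    # With surp and resv both 0, the 1024 pages in HugePages_Total are exactly
    # the 2 nodes x 512 pages requested, so each per-node check should see 512.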
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.702 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105509460 kB' 'MemAvailable: 108768828 kB' 'Buffers: 2704 kB' 'Cached: 14297772 kB' 'SwapCached: 0 kB' 'Active: 11347424 kB' 'Inactive: 3514596 kB' 'Active(anon): 10935524 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564776 kB' 'Mapped: 168540 kB' 'Shmem: 10373980 kB' 'KReclaimable: 314404 kB' 'Slab: 1157332 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842928 kB' 'KernelStack: 27184 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12396720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[setup/common.sh@31-32: the HugePages_Rsvd scan proceeds identically, skipping MemTotal through Writeback before this portion of the log ends]
00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.703 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.704 nr_hugepages=1024 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.704 resv_hugepages=0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.704 surplus_hugepages=0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.704 anon_hugepages=0 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- 
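
[editor's note] The trace above shows setup/common.sh's get_meminfo scanning every /proc/meminfo key until it hits HugePages_Rsvd, echoing 0 and returning; hugepages.sh then records resv=0 against nr_hugepages=1024. The fragments visible in the trace (mapfile -t mem, the "Node +([0-9]) " prefix strip, IFS=': ' read -r var val _, the per-key continue) condense to roughly the sketch below. This is a hedged reconstruction, not the script verbatim; get_meminfo_sketch and its locals are illustrative names.

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern seen in the trace

# Condensed sketch of the traced get_meminfo key scan (illustrative name).
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node counters come from sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # sysfs prefixes every key with "Node N "; strip it, as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching keys are skipped -- the long runs of 'continue' above.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # prints 0 on this rig, matching the log

The linear scan is why each get_meminfo call produces dozens of near-identical trace lines under set -x: every one of the ~50 meminfo keys is tested against the requested key before the match is found.
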
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105521632 kB' 'MemAvailable: 108781000 kB' 'Buffers: 2704 kB' 'Cached: 14297812 kB' 'SwapCached: 0 kB' 'Active: 11345260 kB' 'Inactive: 3514596 kB' 'Active(anon): 10933360 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562628 kB' 'Mapped: 167340 kB' 'Shmem: 10374020 kB' 'KReclaimable: 314404 kB' 'Slab: 1157268 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842864 kB' 'KernelStack: 27136 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12385716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.704 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.705 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.706 00:29:04 
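
[editor's note] At this point the second scan has returned HugePages_Total=1024 and get_nodes has walked /sys/devices/system/node/node+([0-9]), assigning 512 pages to each of the 2 NUMA nodes. A hedged reconstruction of that bookkeeping, using the names visible in the traced hugepages.sh lines (nodes_sys, no_nodes) and the values from this run:

shopt -s extglob

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512    # 512 huge pages expected per node
done
no_nodes=${#nodes_sys[@]}            # 2 on this machine

nr_hugepages=1024 surp=0 resv=0
# The test only proceeds when the kernel-wide HugePages_Total (1024 here)
# equals the requested pages plus surplus plus reserved -- the
# (( 1024 == nr_hugepages + surp + resv )) check in the trace.
if (( 1024 == nr_hugepages + surp + resv )); then
    echo "hugepage totals consistent across $no_nodes nodes"
fi

With resv=0 and surp=0 the arithmetic reduces to 1024 == 1024, so the per-node verification loop that follows in the trace is entered.
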
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51591012 kB' 'MemUsed: 14067996 kB' 'SwapCached: 0 kB' 'Active: 7126612 kB' 'Inactive: 3323512 kB' 'Active(anon): 6977372 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10206832 kB' 'Mapped: 58316 kB' 'AnonPages: 246428 kB' 'Shmem: 6734080 kB' 'KernelStack: 12296 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 688020 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.706 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.707 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 00:29:04 
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.708 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53933260 kB' 'MemUsed: 6746580 kB' 'SwapCached: 0 kB' 'Active: 4222772 kB' 'Inactive: 191084 kB' 'Active(anon): 3960112 kB' 'Inactive(anon): 0 kB' 'Active(file): 262660 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4093688 kB' 'Mapped: 109528 kB' 'AnonPages: 320312 kB' 'Shmem: 3639944 kB' 'KernelStack: 14856 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131792 kB' 'Slab: 469248 kB' 'SReclaimable: 131792 kB' 'SUnreclaim: 337456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
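For readability: the xtrace above is the get_meminfo helper scanning one meminfo file key by key. Below is a minimal Bash sketch of that logic, reconstructed from the trace rather than copied from setup/common.sh, so treat names and details as an approximation of the shipped code:

    #!/usr/bin/env bash
    # Approximate reconstruction of the traced get_meminfo <key> [node] helper.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node queries read that node's own meminfo when the sysfs file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Mismatching keys are skipped (the "continue" lines in the trace)
            # until the requested key is found; then its value is echoed.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 for node1 in the run traced above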
[... xtrace elided: identical read/compare/continue scan over the node1 meminfo keys MemTotal through HugePages_Free; none match HugePages_Surp ...]
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:46.709
00:03:46.709 real 0m3.557s
00:03:46.709 user 0m1.258s
00:03:46.709 sys 0m2.338s
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:46.709 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:46.709 ************************************
00:03:46.709 END TEST per_node_1G_alloc
00:03:46.709 ************************************
00:03:46.709 00:29:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:46.709 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:46.709 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:46.709 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:46.971 ************************************
00:03:46.971 START TEST even_2G_alloc
00:03:46.971 ************************************
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
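The sorted_t/sorted_s assignments in the per_node_1G_alloc verification above use a small Bash trick worth spelling out: indexing a sparse array by each observed value collapses equal values into a single key, so "every node got the same count" reduces to "the array has one element". A hedged sketch in isolation (variable names taken from the trace; the surrounding hugepages.sh code is not reproduced):

    # Index-by-value trick from hugepages.sh@126-127.
    nodes_test=([0]=512 [1]=512)
    sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # One distinct key means every node ended up with the same page count.
    (( ${#sorted_t[@]} == 1 )) && echo "per-node counts are even"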
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.971 00:29:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:49.573 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:49.573 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
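Before the test runs scripts/setup.sh, get_test_nr_hugepages has turned the requested 2 GiB (size=2097152 kB) into nr_hugepages=1024 and split it evenly across both NUMA nodes. A compressed sketch of that arithmetic, assuming the 2048 kB huge page size reported as Hugepagesize in the meminfo snapshots below (the real get_test_nr_hugepages_per_node loop is the one traced above):

    # 2 GiB request -> page count -> even per-node split (values from this log).
    size=2097152            # kB requested
    default_hugepages=2048  # kB per huge page (Hugepagesize in the snapshots)
    nr_hugepages=$(( size / default_hugepages ))   # 1024
    _no_nodes=2
    nodes_test=()
    for (( node = _no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))   # 512 per node
    done
    echo "NRHUGE=$nr_hugepages per-node: ${nodes_test[*]}"  # 1024, 512 512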
00:03:49.573 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:49.573 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.835 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105549364 kB' 'MemAvailable: 108808732 kB' 'Buffers: 2704 kB' 'Cached: 14297940 kB' 'SwapCached: 0 kB' 'Active: 11347136 kB' 'Inactive: 3514596 kB' 'Active(anon): 10935236 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563932 kB' 'Mapped: 167448 kB' 'Shmem: 10374148 kB' 'KReclaimable: 314404 kB' 'Slab: 1157000 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842596 kB' 'KernelStack: 27232 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12386484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
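The hugepages.sh@96 test above compares the kernel's transparent-hugepage setting ("always [madvise] never" on this box, i.e. madvise selected) against *[never]*: anonymous huge pages are only counted when THP is not globally disabled. A minimal sketch of that guard, assuming the standard sysfs path since the trace does not show where the string is read from:

    # Count AnonHugePages only when THP is not set to "[never]".
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
    fi
    echo "anon=$anon"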
[... xtrace elided: read/compare/continue over every /proc/meminfo key from MemTotal through HardwareCorrupted; none match AnonHugePages ...]
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.837 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105550428 kB' 'MemAvailable: 108809796 kB' 'Buffers: 2704 kB' 'Cached: 14297940 kB' 'SwapCached: 0 kB' 'Active: 11347076 kB' 'Inactive: 3514596 kB' 'Active(anon): 10935176 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563868 kB' 'Mapped: 167436 kB' 'Shmem: 10374148 kB' 'KReclaimable: 314404 kB' 'Slab: 1156988 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842584 kB' 'KernelStack: 27200 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12386500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
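What follows in the trace is the same key-scan as before, this time for the system-wide HugePages_Surp. The bookkeeping it feeds (hugepages.sh@115-117, as seen in the earlier per_node_1G_alloc pass) folds reserved and surplus pages back into each node's expected count. A hedged sketch of that step; the source of resv is assumed to be HugePages_Rsvd, since the trace does not show its assignment:

    # Fold global reserved pages and per-node surplus into the expectations.
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # assumed source of "resv"
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117
    done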
[... xtrace elided: read/compare/continue over the /proc/meminfo keys from MemTotal through VmallocChunk; none match HugePages_Surp; the scan continues past the end of this excerpt ...]
read -r var val _ 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.838 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.104 00:29:08 
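The loop just traced is setup/common.sh's get_meminfo: each meminfo line is split on ': ' into key and value, non-matching keys are skipped with continue, and the value of the requested key is echoed before the function returns. A minimal standalone sketch of the same pattern (the name meminfo_value is illustrative, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Sketch of the per-key scan shown in the trace above.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # skip non-matching keys, as traced
          echo "$val"                       # e.g. 0 for HugePages_Surp on this builder
          return 0
      done < /proc/meminfo
  }
  meminfo_value HugePages_Surp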
00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105550028 kB' 'MemAvailable: 108809396 kB' 'Buffers: 2704 kB' 'Cached: 14297960 kB' 'SwapCached: 0 kB' 'Active: 11346260 kB' 'Inactive: 3514596 kB' 'Active(anon): 10934360 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563376 kB' 'Mapped: 167360 kB' 'Shmem: 10374168 kB' 'KReclaimable: 314404 kB' 'Slab: 1156956 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842552 kB' 'KernelStack: 27168 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12387540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
00:03:50.104 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- [condensed: the read loop skipped MemTotal through HugePages_Free with continue until HugePages_Rsvd matched]
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.106 nr_hugepages=1024
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.106 resv_hugepages=0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.106 surplus_hugepages=0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.106 anon_hugepages=0
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-@31 -- [condensed: get_meminfo set get=HugePages_Total with node unset, re-read /proc/meminfo via mapfile -t mem, and restarted the IFS=': ' read loop]
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105548768 kB' 'MemAvailable: 108808136 kB' 'Buffers: 2704 kB' 'Cached: 14297960 kB' 'SwapCached: 0 kB' 'Active: 11347152 kB' 'Inactive: 3514596 kB' 'Active(anon): 10935252 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564344 kB' 'Mapped: 167864 kB' 'Shmem: 10374168 kB' 'KReclaimable: 314404 kB' 'Slab: 1156956 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842552 kB' 'KernelStack: 27152 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12388560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
00:03:50.106 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- [condensed: the read loop skipped MemTotal through Unaccepted with continue until HugePages_Total matched]
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
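Both full-system snapshots above agree with the earlier reads (surp=0, resv=0, HugePages_Total=1024), so the assertions at setup/hugepages.sh@107 and @110 hold: total pages equal nr_hugepages plus surplus plus reserved. A condensed restatement of that check, assuming the same variable names the trace uses:

  # Sketch: the accounting check the test performs, with this run's values.
  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: total=$total"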
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-@31 -- [condensed: get_meminfo set get=HugePages_Surp with node=0, switched mem_f to /sys/devices/system/node/node0/meminfo, read it via mapfile -t mem, stripped the "Node 0 " prefix, and restarted the IFS=': ' read loop]
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51619600 kB' 'MemUsed: 14039408 kB' 'SwapCached: 0 kB' 'Active: 7126888 kB' 'Inactive: 3323512 kB' 'Active(anon): 6977648 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10206964 kB' 'Mapped: 58336 kB' 'AnonPages: 246584 kB' 'Shmem: 6734212 kB' 'KernelStack: 12280 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 687904 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:50.108 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- [condensed: the read loop skipped node0 keys MemTotal through Active(file) with continue]
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 
00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- 
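The dozens of read/continue entries collapsed above are one helper at work: get_meminfo in setup/common.sh scans a meminfo file line by line until the requested field matches, then echoes its value. A minimal re-creation of that helper in bash, reconstructed from the common.sh@17-@33 xtrace entries above (the actual SPDK function may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    # Return one field from /proc/meminfo, or from a NUMA node's meminfo
    # file when a node number is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above

Each non-matching field costs one @31 read plus one @32 continue in the trace, which is why a single lookup spans so many xtrace lines.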
00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.109 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.110 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53924404 kB' 'MemUsed: 6755436 kB' 'SwapCached: 0 kB' 'Active: 4219532 kB' 'Inactive: 191084 kB' 'Active(anon): 3956872 kB' 'Inactive(anon): 0 kB' 'Active(file): 262660 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4093744 kB' 'Mapped: 109528 kB' 'AnonPages: 316988 kB' 'Shmem: 3640000 kB' 'KernelStack: 14872 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131792 kB' 'Slab: 469052 kB' 'SReclaimable: 131792 kB' 'SUnreclaim: 337260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace trimmed: read/continue over each node1 meminfo key down to HugePages_Surp ...]
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:50.111 node0=512 expecting 512
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:50.111 node1=512 expecting 512
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:50.111
00:03:50.111 real 0m3.235s
00:03:50.111 user 0m1.082s
00:03:50.111 sys 0m2.122s
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:50.111 00:29:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:50.111 ************************************
00:03:50.111 END TEST even_2G_alloc
00:03:50.111 ************************************
00:03:50.111 00:29:08 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:50.111 00:29:08 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:50.111 00:29:08 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:50.111 00:29:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.111 ************************************
00:03:50.111 START TEST odd_alloc
00:03:50.111 ************************************
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.111 00:29:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.417 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:53.417 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:53.417 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
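HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes asks setup.sh for 2049 MB of 2048 kB hugepages, i.e. nr_hugepages=1025, spread over both NUMA nodes, and the hugepages.sh@81-@84 entries above show the split landing as node1=512 and node0=513. A sketch of that division, inferred from those trace lines (only the traced values are shown directly by the log; the exact operators are an assumption):

    # Split _nr_hugepages across _no_nodes, filling the highest node first;
    # integer division leaves the remainder on node 0: 1025 -> 513 + 512.
    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2
        local -a nodes_test
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ": 513", then ": 0"
            : $(( --_no_nodes ))                                  # traced as ": 1", then ": 0"
        done
        echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
    }

    get_test_nr_hugepages_per_node 1025 2   # node0=513 node1=512

The even_2G_alloc pass earlier did the same walk with 1024 pages and ended at 512/512, matching its "node0=512 expecting 512" checks.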
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.680 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105548664 kB' 'MemAvailable: 108808032 kB' 'Buffers: 2704 kB' 'Cached: 14298116 kB' 'SwapCached: 0 kB' 'Active: 11348540 kB' 'Inactive: 3514596 kB' 'Active(anon): 10936640 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565288 kB' 'Mapped: 167468 kB' 'Shmem: 10374324 kB' 'KReclaimable: 314404 kB' 'Slab: 1157144 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842740 kB' 'KernelStack: 27200 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12387464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[... xtrace trimmed: read/continue over each /proc/meminfo key down to AnonHugePages ...]
00:03:53.681 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:53.681 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.681 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
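Before comparing hugepage counters, verify_nr_hugepages decides whether anonymous hugepages need to be counted at all: the hugepages.sh@96 test above passes because the THP mode string "always [madvise] never" does not contain the literal "[never]", so AnonHugePages is read and comes back 0. A sketch of that gate, reusing the get_meminfo sketch from earlier; the sysfs path is an assumption, since the trace only shows the mode string's value:

    # THP is only ruled out when the kernel reports the [never] mode.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP may be in use, so anonymous hugepages count toward the total.
        anon=$(get_meminfo AnonHugePages)
    fi

With anon at 0 here, the verification reduces to the HugePages_* counters alone.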
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105549192 kB' 'MemAvailable: 108808560 kB' 'Buffers: 2704 kB' 'Cached: 14298116 kB' 'SwapCached: 0 kB' 'Active: 11349172 kB' 'Inactive: 3514596 kB' 'Active(anon): 10937272 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565920 kB' 'Mapped: 167456 kB' 'Shmem: 10374324 kB' 'KReclaimable: 314404 kB' 'Slab: 1157144 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842740 kB' 'KernelStack: 27248 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12388720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
00:03:53.949 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: each key from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and hits continue]
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.951 00:29:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.951 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.951 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.951 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105550600 kB' 'MemAvailable: 108809968 kB' 'Buffers: 2704 kB' 'Cached: 14298136 kB' 'SwapCached: 0 kB' 'Active: 11348864 kB' 'Inactive: 3514596 kB' 'Active(anon): 10936964 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565944 kB' 'Mapped: 167376 kB' 'Shmem: 10374344 kB' 'KReclaimable: 314404 kB' 'Slab: 1157128 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842724 kB' 'KernelStack: 27136 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12390348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
00:03:53.952 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: each key from MemTotal through HugePages_Free fails the match against HugePages_Rsvd and hits continue]
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:53.953 nr_hugepages=1025
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.953 resv_hugepages=0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.953 surplus_hugepages=0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.953 anon_hugepages=0
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
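The odd_alloc case requests a deliberately odd pool (1025 pages of 2048 kB) so that any rounding, surplus, or reservation by the kernel shows up as a mismatch in the bookkeeping above. A hedged condensation of the arithmetic being asserted (variable names mirror the trace; the standalone-script form is illustrative, not the suite's literal code):

    # Values observed in this run of the trace:
    nr_hugepages=1025   # requested pool size (deliberately odd)
    anon=0              # AnonHugePages
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total, fetched again just below
    # Invariant the suite checks: the pool holds exactly the requested
    # pages with no surplus or reserved remainder, i.e. 1025 == 1025 + 0 + 0.
    (( total == nr_hugepages + surp + resv && total == nr_hugepages ))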
00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105551116 kB' 'MemAvailable: 108810484 kB' 'Buffers: 2704 kB' 'Cached: 14298156 kB' 'SwapCached: 0 kB' 'Active: 11348232 kB' 'Inactive: 3514596 kB' 'Active(anon): 10936332 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565344 kB' 'Mapped: 167376 kB' 'Shmem: 10374364 kB' 'KReclaimable: 314404 kB' 'Slab: 1157132 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842728 kB' 'KernelStack: 27216 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12390372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.953 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.954 00:29:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop checks and skips SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted; none match HugePages_Total]
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
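The records above are setup/common.sh's get_meminfo walking /proc/meminfo field by field until the requested key (HugePages_Total) matches, then echoing its value, 1025. Below is a minimal standalone sketch of that pattern; it is hedged: the function name and locals are illustrative rather than the exact SPDK helper, and it assumes a Linux meminfo layout.

#!/usr/bin/env bash
# Hedged, standalone re-sketch of the get_meminfo pattern traced above: scan
# a meminfo-style file and print the value of a single field. Function and
# variable names are illustrative; the real helper lives in setup/common.sh.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node meminfo files prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the per-node prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # value only; unit lands in $_
            return 0
        fi
    done
    return 1
}
get_meminfo_sketch HugePages_Total        # printed 1025 on this test box

The mapfile-plus-prefix-strip step is what lets the same loop serve per-node files, whose lines all begin with "Node N ".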
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.955 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51622316 kB' 'MemUsed: 14036692 kB' 'SwapCached: 0 kB' 'Active: 7128928 kB' 'Inactive: 3323512 kB' 'Active(anon): 6979688 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10207116 kB' 'Mapped: 57856 kB' 'AnonPages: 248488 kB' 'Shmem: 6734364 kB' 'KernelStack: 12248 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 687632 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.956 00:29:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the read loop likewise checks and skips MemUsed through HugePages_Free for node 0; none match HugePages_Surp]
00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
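Here get_meminfo re-ran against /sys/devices/system/node/node0/meminfo and HugePages_Surp came back 0, so node 0 keeps its 512 test pages. A small illustrative companion, assuming a NUMA kernel with per-node meminfo files (variable names here are mine, not the script's):

#!/usr/bin/env bash
# Enumerate NUMA nodes the way the trace's get_nodes does and record each
# node's surplus hugepage count straight from its per-node meminfo file.
shopt -s extglob nullglob
for node_dir in /sys/devices/system/node/node+([0-9]); do
    n=${node_dir##*node}                # ".../node1" -> "1"
    # Per-node lines read "Node 1 HugePages_Surp: 0", so the value is $4.
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
    echo "node$n HugePages_Surp=$surp"  # the trace shows 0 on both nodes
done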
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53930368 kB' 'MemUsed: 6749472 kB' 'SwapCached: 0 kB' 'Active: 4219564 kB' 'Inactive: 191084 kB' 'Active(anon): 3956904 kB' 'Inactive(anon): 0 kB' 'Active(file): 262660 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4093764 kB' 'Mapped: 109528 kB' 'AnonPages: 317552 kB' 'Shmem: 3640020 kB' 'KernelStack: 15032 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131792 kB' 'Slab: 469500 kB' 'SReclaimable: 131792 kB' 'SUnreclaim: 337708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.957 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [xtrace condensed: SwapCached through HugePages_Free checked and skipped for node 1; none match HugePages_Surp]
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.958 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:53.959 node0=512 expecting 513
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
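The sorted_t/sorted_s assignments above are a compact sorting trick: bash indexed arrays enumerate their indices in ascending order, so storing each per-node page count as an index yields a sorted key list for free. A hedged re-sketch with the trace's numbers (the per-node order used here is hypothetical; only the multiset of counts matters):

#!/usr/bin/env bash
# Compare two hugepage distributions as multisets, not per-node placements:
# using each count as an array index sorts the counts automatically.
declare -a sorted_t sorted_s
for c in 513 512; do sorted_t[c]=1; done   # counts the test computed per node
for c in 512 513; do sorted_s[c]=1; done   # counts read back from sysfs
echo "test: ${!sorted_t[*]}  sys: ${!sorted_s[*]}"   # both print "512 513"
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage distribution OK"

This is exactly why the final check in the trace compares the literal string "512 513" on both sides even though the nodes were populated in different orders.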
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:53.959 node1=513 expecting 512
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:53.959
00:03:53.959 real	0m3.814s
00:03:53.959 user	0m1.513s
00:03:53.959 sys	0m2.360s
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:53.959 00:29:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:53.959 ************************************
00:03:53.959 END TEST odd_alloc
00:03:53.959 ************************************
00:03:53.959 00:29:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:53.959 00:29:12 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:53.959 00:29:12 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:53.959 00:29:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.959 ************************************
00:03:53.959 START TEST custom_alloc
00:03:53.959 ************************************
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
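get_test_nr_hugepages just turned a request of 1048576 into nr_hugepages=512, and the same arithmetic later maps 2097152 to 1024. Those numbers only fit if the size is in kB against a 2048 kB default hugepage size; that unit is inferred from the trace, not confirmed by it. A sketch under exactly that assumption:

#!/usr/bin/env bash
# Hedged reconstruction of the size-to-page-count step traced above,
# assuming sizes in kB and a 2048 kB (2 MiB) default hugepage size.
default_hugepages=2048                          # kB per hugepage, assumed
for size in 1048576 2097152; do                 # the two requests traced
    (( size >= default_hugepages )) || continue
    echo "size=${size}kB -> nr_hugepages=$(( size / default_hugepages ))"
done
# prints 512 and 1024, matching nr_hugepages in the trace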
nodes_test[_no_nodes - 1]=256 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:53.959 00:29:12 
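The two @182/@183 iterations above build the HUGENODE list and the running page total. A re-sketch using the names visible in the trace; the comma join relies on the function-local IFS=, set at hugepages.sh@167:

#!/usr/bin/env bash
# Assemble a HUGENODE spec from per-node targets, as in the trace.
nodes_hp=([0]=512 [1]=1024)                 # per-node targets from the trace
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages total: $_nr_hugepages"   # 1536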
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:53.959 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.221 00:29:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:56.770 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:56.770 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:56.770 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:57.032 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:57.032 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 --
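setup.sh then ran with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and the verify step starts from nr_hugepages=1536. What such a per-node reservation boils down to on stock Linux is writing each node's 2048kB pool size into sysfs; the sketch below is a hypothetical illustration of that effect (requires root), not SPDK's setup.sh itself:

#!/usr/bin/env bash
# Hypothetical: apply a per-node hugepage reservation via the standard
# Linux sysfs knobs; 2 MiB (2048kB) pages are assumed.
declare -A want=([0]=512 [1]=1024)
for node in "${!want[@]}"; do
    echo "${want[$node]}" \
        > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done
grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1536 after the writes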
# local sorted_t 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104492756 kB' 'MemAvailable: 107752124 kB' 'Buffers: 2704 kB' 'Cached: 14298288 kB' 'SwapCached: 0 kB' 'Active: 11349884 kB' 'Inactive: 3514596 kB' 'Active(anon): 10937984 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566252 kB' 'Mapped: 167504 kB' 'Shmem: 10374496 kB' 'KReclaimable: 314404 kB' 'Slab: 1157132 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842728 kB' 'KernelStack: 27232 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12388540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.300 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.300 
00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop checks and skips MemFree through SUnreclaim; none match AnonHugePages]
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.302 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104492816 kB' 'MemAvailable: 107752184 kB' 'Buffers: 2704 kB' 'Cached: 14298292 kB' 'SwapCached: 0 kB' 'Active: 11349484 kB' 'Inactive: 3514596 kB' 
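The run above is one complete get_meminfo call: the helper snapshots /proc/meminfo once, then walks it key by key until the requested field (AnonHugePages here) matches and its value is echoed. As a reference for reading these traces, here is a minimal sketch of that helper reconstructed purely from the common.sh@17..@33 trace entries; the real code lives in SPDK's test/setup/common.sh and details here are inferred, not copied.

#!/usr/bin/env bash
# Minimal sketch of get_meminfo, reconstructed from the xtrace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # A node id selects the per-node view. With node empty (as in this run)
    # the probe degenerates to /sys/devices/system/node/node/meminfo, which
    # is exactly the odd path seen at common.sh@23, so the fallback wins.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # The long [[ Key == \A\n\o\n... ]] / continue runs in the log are this
    # loop skipping every key that is not the one requested.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages   # prints 0 on the box captured above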
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.301 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.302 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104492816 kB' 'MemAvailable: 107752184 kB' 'Buffers: 2704 kB' 'Cached: 14298292 kB' 'SwapCached: 0 kB' 'Active: 11349484 kB' 'Inactive: 3514596 kB' 'Active(anon): 10937584 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565916 kB' 'Mapped: 167496 kB' 'Shmem: 10374500 kB' 'KReclaimable: 314404 kB' 'Slab: 1157148 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842744 kB' 'KernelStack: 27216 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12388556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[... repetitive xtrace elided: setup/common.sh@32 tests every snapshot key from MemTotal through HugePages_Rsvd against HugePages_Surp and hits 'continue' for each non-match ...]
00:03:57.303 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.303 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.303 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.303 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
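The common.sh@29 entry that appears in every one of these calls is the one non-obvious bash idiom in the trace: it strips a leading "Node <id> " from each element of the mem array using an extglob pattern, so per-node meminfo files parse with the same loop as /proc/meminfo. A standalone demo; the sample lines are made up, but the prefix format is what per-node meminfo actually emits:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) "one or more digits" pattern

mem=('Node 0 MemTotal: 126338848 kB' 'Node 0 HugePages_Total: 1536')
# ${var#pattern} removes the shortest matching prefix from each element.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# -> MemTotal: 126338848 kB
# -> HugePages_Total: 1536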
00:03:57.303 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.304 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104493740 kB' 'MemAvailable: 107753108 kB' 'Buffers: 2704 kB' 'Cached: 14298312 kB' 'SwapCached: 0 kB' 'Active: 11349060 kB' 'Inactive: 3514596 kB' 'Active(anon): 10937160 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565896 kB' 'Mapped: 167420 kB' 'Shmem: 10374520 kB' 'KReclaimable: 314404 kB' 'Slab: 1157144 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842740 kB' 'KernelStack: 27216 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12388580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
[... repetitive xtrace elided: setup/common.sh@32 tests every snapshot key from MemTotal through HugePages_Free against HugePages_Rsvd and hits 'continue' for each non-match ...]
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:57.306 nr_hugepages=1536
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.306 resv_hugepages=0
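With anon, surp, and resv all resolved to 0, the arithmetic checks out by inspection: HugePages_Total is 1536, and 1536 pages of 2048 kB each is exactly the 3145728 kB the snapshots report as Hugetlb. The consistency checks the script runs next (hugepages.sh@107 and @109 below) reduce to the identity sketched here; the variable names mirror the trace, but stating the check in this standalone form is an assumption from the logged expressions, not SPDK's literal code:

#!/usr/bin/env bash
anon=0 surp=0 resv=0   # AnonHugePages, HugePages_Surp, HugePages_Rsvd above
nr_hugepages=1536      # the page count this test configured earlier
total=1536             # HugePages_Total as read by get_meminfo
pagesize_kb=2048       # Hugepagesize

# Everything the kernel holds must equal the request plus surplus/reserved...
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off" >&2
# ...and with surp=resv=0 the request alone covers the whole pool.
(( total == nr_hugepages )) || echo "unexpected surplus/reserved pages" >&2
# Cross-check the byte accounting: 1536 * 2048 kB = 3145728 kB (Hugetlb).
echo "Hugetlb: $((total * pagesize_kb)) kB"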
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.306 surplus_hugepages=0 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.306 anon_hugepages=0 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104492732 kB' 'MemAvailable: 107752100 kB' 'Buffers: 2704 kB' 'Cached: 14298332 kB' 'SwapCached: 0 kB' 'Active: 11349084 kB' 'Inactive: 3514596 kB' 'Active(anon): 10937184 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565896 kB' 'Mapped: 167420 kB' 'Shmem: 10374540 kB' 'KReclaimable: 314404 kB' 'Slab: 1157144 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842740 kB' 'KernelStack: 27216 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12391456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.306 00:29:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.306 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the per-field scan repeats over every remaining /proc/meminfo field until HugePages_Total matches]
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:57.308
00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51643180 kB' 'MemUsed: 14015828 kB' 'SwapCached: 0 kB' 'Active: 7128940 kB' 'Inactive: 3323512 kB' 'Active(anon): 6979700 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10207240 kB' 'Mapped: 57892 kB' 'AnonPages: 248448 kB' 'Shmem: 6734488 kB' 'KernelStack: 12328 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 687572 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 504960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.308 00:29:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue
[xtrace elided: the per-field scan repeats over node0's meminfo fields until HugePages_Surp matches]
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
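With a node argument (here node=1), get_meminfo swaps mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the 'Node <N> ' prefix that per-node meminfo lines carry before the same field scan runs. A sketch of that per-node path (assumes extglob for the +([0-9]) pattern; a simplified stand-in, not the setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    # Look up one meminfo field, scoped to a NUMA node when one is given.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " line prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # 0 on this runner, per the trace below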
00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 52858108 kB' 'MemUsed: 7821732 kB' 'SwapCached: 0 kB' 'Active: 4221076 kB' 'Inactive: 191084 kB' 'Active(anon): 3958416 kB' 'Inactive(anon): 0 kB' 'Active(file): 262660 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4093816 kB' 'Mapped: 109548 kB' 'AnonPages: 318476 kB' 'Shmem: 3640072 kB' 'KernelStack: 14888 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131792 kB' 'Slab: 469572 kB' 'SReclaimable: 131792 kB' 'SUnreclaim: 337780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.310 00:29:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace elided: the per-field scan repeats over node1's meminfo fields until HugePages_Surp matches]
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:57.311 node0=512 expecting 512
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:57.311 node1=1024 expecting 1024
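custom_alloc asked for an asymmetric split, and the two echoed lines confirm it: node 0 holds 512 of the 1536 pages and node 1 the other 1024, each matching the value recorded in nodes_sys. A quick manual spot-check of the same layout could look like this (a hypothetical helper, not part of the test suite; awk pulls the last column of each node's HugePages_Total line):

    #!/usr/bin/env bash
    # Compare live per-node HugePages_Total against the expected 512/1024 split.
    declare -A expected=([0]=512 [1]=1024)
    for node in "${!expected[@]}"; do
        actual=$(awk '/HugePages_Total/ {print $NF}' \
                 "/sys/devices/system/node/node$node/meminfo")
        if [[ $actual -eq ${expected[$node]} ]]; then
            echo "node$node=$actual expecting ${expected[$node]} (ok)"
        else
            echo "node$node=$actual expecting ${expected[$node]} (MISMATCH)"
        fi
    done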
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:57.311 00:03:57.311 real 0m3.304s 00:03:57.311 user 0m1.197s 00:03:57.311 sys 0m2.040s 00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:57.311 00:29:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.311 ************************************ 00:03:57.311 END TEST custom_alloc 00:03:57.311 ************************************ 00:03:57.311 00:29:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:57.311 00:29:15 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:57.312 00:29:15 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:57.312 00:29:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.573 ************************************ 00:03:57.573 START TEST no_shrink_alloc 00:03:57.573 ************************************ 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.573 00:29:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # 
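The get_test_nr_hugepages trace above works out to nr_hugepages=1024 for a 2097152 kB request, all assigned to node 0 because the caller passed node_ids=('0'). A simplified sketch of that arithmetic; the size/default_hugepages division is inferred from the logged values rather than quoted from the script:

```bash
#!/usr/bin/env bash
# Sketch of the sizing step as traced above. The division is an inference
# from the values in this run (2097152 kB / 2048 kB = 1024 pages); variable
# names follow the xtrace.
size=2097152             # requested pool in kB (2 GiB)
default_hugepages=2048   # Hugepagesize reported by /proc/meminfo, in kB
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))   # -> 1024
node_ids=(0)             # the test pins the whole pool to node 0
declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages on node(s): ${node_ids[*]}"
```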
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.882 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:00.882 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105505348 kB' 
'MemAvailable: 108764716 kB' 'Buffers: 2704 kB' 'Cached: 14298464 kB' 'SwapCached: 0 kB' 'Active: 11354260 kB' 'Inactive: 3514596 kB' 'Active(anon): 10942360 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570472 kB' 'Mapped: 168024 kB' 'Shmem: 10374672 kB' 'KReclaimable: 314404 kB' 'Slab: 1157392 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 842988 kB' 'KernelStack: 27360 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12396016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.882 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.882 
00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: Active through Percpu checked against AnonHugePages, no match]
00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105500888 kB' 'MemAvailable: 108760256 kB' 'Buffers: 2704 kB' 'Cached: 14298468 kB' 'SwapCached: 0 kB' 'Active: 11357248 kB' 'Inactive: 3514596 kB' 'Active(anon): 10945348 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573496 kB' 'Mapped: 168020 kB' 'Shmem: 10374676 kB' 'KReclaimable: 314404 kB' 'Slab: 1157532 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843128 kB' 'KernelStack: 27344 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12398556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.884 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.884 00:29:19 
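Every one of these long scans comes from the same get_meminfo pattern in setup/common.sh: slurp a meminfo file, strip any "Node N" prefix, then walk line by line until the requested key matches. A self-contained approximation (a sketch, not the exact SPDK helper):

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below
# Approximation of the get_meminfo pattern driving the scans in this log.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the per-key "continue" churn above
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Surp   # prints 0 on this box, matching the trace
```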
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: MemFree through HugePages_Rsvd checked against HugePages_Surp, no match]
00:04:00.886 00:29:19
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105502852 kB' 'MemAvailable: 108762220 kB' 'Buffers: 2704 kB' 'Cached: 14298484 kB' 'SwapCached: 0 kB' 'Active: 11350984 kB' 'Inactive: 3514596 kB' 'Active(anon): 10939084 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567792 kB' 'Mapped: 167436 kB' 'Shmem: 10374692 kB' 'KReclaimable: 314404 kB' 'Slab: 1157464 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843060 kB' 'KernelStack: 27392 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12390744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
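Note that each of the three probes here (anon, surp, and the reserved count now being read) rescans the whole file from the top. A hypothetical one-pass condensation of the same bookkeeping, shown for contrast rather than as SPDK code:

```bash
#!/usr/bin/env bash
# One pass over /proc/meminfo collecting every hugepage counter at once,
# instead of one full scan per key (hypothetical alternative sketch).
declare -A hp
while IFS=': ' read -r var val _; do
    case $var in
        HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)
            hp[$var]=$val ;;
    esac
done < /proc/meminfo
echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
     "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"
```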
00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:00.886 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: MemAvailable through Committed_AS checked against HugePages_Rsvd, no match]
00:04:00.888 00:29:19
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.888 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.889 nr_hugepages=1024 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.889 resv_hugepages=0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.889 surplus_hugepages=0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.889 anon_hugepages=0 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503268 kB' 'MemAvailable: 108762636 kB' 'Buffers: 2704 kB' 'Cached: 14298508 kB' 'SwapCached: 0 kB' 'Active: 11350324 kB' 'Inactive: 3514596 kB' 'Active(anon): 10938424 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566988 kB' 'Mapped: 167436 kB' 'Shmem: 10374716 kB' 'KReclaimable: 314404 kB' 'Slab: 1157464 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843060 kB' 'KernelStack: 27360 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12390768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.889 
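That printf dump is the whole of /proc/meminfo captured into the mem array; the per-key scans condensed above and below are one lookup function walking over it. A minimal sketch of that pattern, reconstructed from the traced commands (names follow the xtrace, but this is not the verbatim setup/common.sh source):

# Sketch of the get_meminfo lookup the xtrace walks through, reconstructed
# from the traced commands; not the verbatim setup/common.sh source.
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem line
    mem_f=/proc/meminfo
    # With a node argument, read the per-node copy from sysfs instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix (extglob)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"               # e.g. HugePages_Total -> 1024
            return 0
        fi
    done
    return 1
}

Called as get_meminfo HugePages_Total it answers 1024 from /proc/meminfo; get_meminfo HugePages_Surp 0 reads node0's copy, which is why the trace below switches mem_f to /sys/devices/system/node/node0/meminfo.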
00:04:00.889 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [per-key scan of /proc/meminfo for HugePages_Total: MemTotal through Unaccepted all skipped via continue]
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.155 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50583440 kB' 'MemUsed: 15075568 kB' 'SwapCached: 0 kB' 'Active: 7129216 kB' 'Inactive: 3323512 kB' 'Active(anon): 6979976 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10207328 kB' 'Mapped: 57888 kB' 'AnonPages: 248572 kB' 'Shmem: 6734576 kB' 'KernelStack: 12488 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182612 kB' 'Slab: 687804 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
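Before this per-node lookup, get_nodes walked sysfs and recorded each node's 2 MB hugepage count (nodes_sys[0]=1024, nodes_sys[1]=0 in the trace), and the verify loop then folds reserved and surplus pages into each node's expected total. A rough sketch of that accounting, with array names following the xtrace (how nodes_test is seeded is outside this part of the log, so that step is only noted in a comment):

# Rough sketch of the per-node accounting traced above; array names follow
# the xtrace, the exact seeding of nodes_test is an assumption.
shopt -s extglob nullglob
declare -A nodes_sys nodes_test   # nodes_test: expected pages per node, seeded elsewhere
for node in /sys/devices/system/node/node+([0-9]); do
    # node0 carries all 1024 pages on this box, node1 carries 0
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}         # 2 on this machine
(( no_nodes > 0 )) || exit 1

resv=0                            # HugePages_Rsvd from the earlier lookup
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))               # add reserved pages
    surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus, 0 here
    (( nodes_test[node] += surp ))
done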
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [per-key scan of node0 meminfo for HugePages_Surp: MemTotal through HugePages_Free all skipped via continue]
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.156 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.461 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:04.461 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:04.461 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105505308 kB' 'MemAvailable: 108764676 kB' 'Buffers: 2704 kB' 'Cached: 14298624 kB' 'SwapCached: 0 kB' 'Active: 11351036 kB' 'Inactive: 3514596 kB' 'Active(anon): 10939136 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567636 kB' 'Mapped: 167452 kB' 'Shmem: 10374832 kB' 'KReclaimable: 314404 kB' 'Slab: 1157892 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843488 kB' 'KernelStack: 27232 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12391528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
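The mapfile/printf trace above is get_meminfo dumping /proc/meminfo and then scanning it key by key. A self-contained re-creation of that pattern, reconstructed from the xtrace (the real setup/common.sh may differ in detail):

#!/usr/bin/env bash
# Reconstruction of the get_meminfo pattern traced above: read the whole
# meminfo file, strip any "Node N " prefix, then scan for one key.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # With a node argument, prefer the per-node sysfs copy (this is the
    # @23 test above, which fails here because node is empty).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long run of continues in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the box in this log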
00:04:04.729 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: every field of the snapshot above, MemTotal through HardwareCorrupted, is tested against AnonHugePages and skipped with continue]
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
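anon ends up as 0 here because the @96 test found transparent hugepages not set to [never] (the sysfs value was "always [madvise] never", with brackets marking the active policy), so the AnonHugePages counter was sampled and happened to be 0 kB. A sketch of that gate, assuming the standard kernel interface:

#!/usr/bin/env bash
# THP gate sketch: only count anonymous huge pages when THP can hand
# them out, i.e. when the active policy is not [never].
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon} kB"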
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.731 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105509676 kB' 'MemAvailable: 108769044 kB' 'Buffers: 2704 kB' 'Cached: 14298628 kB' 'SwapCached: 0 kB' 'Active: 11352068 kB' 'Inactive: 3514596 kB' 'Active(anon): 10940168 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568680 kB' 'Mapped: 167456 kB' 'Shmem: 10374836 kB' 'KReclaimable: 314404 kB' 'Slab: 1157880 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843476 kB' 'KernelStack: 27360 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12393280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB'
00:04:04.732 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace condensed: every snapshot field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
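With anon and surp both 0, the remaining input is HugePages_Rsvd, read next. The verification then reduces to checking that every configured page is still attributable to the node pools; a sketch of that bookkeeping, using the counters from the snapshots above (an assumed formula for illustration, not hugepages.sh itself):

#!/usr/bin/env bash
# Bookkeeping sketch for the node0 check seen earlier in this log.
total=1024   # HugePages_Total
surp=0       # HugePages_Surp: pages allocated beyond the configured pool
resv=0       # HugePages_Rsvd: pages promised to mappings but not yet faulted
# With no surplus and no reserved pages, the configured pool should be
# intact, which is what "node0=1024 expecting 1024" asserted above.
echo "node0=$((total - surp)) expecting $total"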
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568640 kB' 'Mapped: 167448 kB' 'Shmem: 10374852 kB' 'KReclaimable: 314404 kB' 'Slab: 1158048 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843644 kB' 'KernelStack: 27360 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12391568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 
00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.733 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.734 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' / read -r var val _ loop emits an identical continue for every remaining /proc/meminfo key from SwapFree through HugePages_Total while scanning for HugePages_Rsvd; the last iterations and the match resume below]
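[editor's note: the condensed scan above is the harness's get_meminfo at work; a minimal sketch of the equivalent logic, not part of the log and with an illustrative helper name, assuming only the standard "Key: value kB" layout of /proc/meminfo]
get_meminfo_sketch() {
    # print the value of one meminfo key, e.g. HugePages_Rsvd
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
get_meminfo_sketch HugePages_Rsvd    # prints 0 on this machine, matching the 'echo 0' below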
-- setup/common.sh@32 -- # continue 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.735 nr_hugepages=1024 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.735 resv_hugepages=0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.735 surplus_hugepages=0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.735 anon_hugepages=0 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105518152 kB' 'MemAvailable: 108777520 kB' 'Buffers: 2704 kB' 'Cached: 14298668 kB' 'SwapCached: 0 kB' 'Active: 11352148 kB' 'Inactive: 3514596 kB' 'Active(anon): 10940248 kB' 'Inactive(anon): 0 kB' 'Active(file): 411900 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568660 kB' 'Mapped: 167448 kB' 'Shmem: 10374876 kB' 'KReclaimable: 314404 kB' 'Slab: 1158048 kB' 'SReclaimable: 314404 kB' 'SUnreclaim: 843644 kB' 'KernelStack: 27408 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12391592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 112896 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4392308 kB' 'DirectMap2M: 29890560 kB' 'DirectMap1G: 101711872 kB' 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.735 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.736 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.736 00:29:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: identical skip iterations for every /proc/meminfo key from Active through Unaccepted while scanning for HugePages_Total; the match and the echo 1024 resume below]
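[editor's note: a quick cross-check of the figures the test asserts here (nr_hugepages=1024, resv=0, surp=0), not part of the log; the sysfs path assumes the 2048 kB hugepage size reported in the dump above]
grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages        # 1024 here
cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages   # 0 here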
-- # read -r var val _ 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.737 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50575596 kB' 'MemUsed: 15083412 kB' 'SwapCached: 0 kB' 'Active: 7130636 kB' 'Inactive: 3323512 kB' 'Active(anon): 6981396 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3323512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10207364 kB' 'Mapped: 57908 kB' 'AnonPages: 249916 kB' 'Shmem: 6734612 kB' 'KernelStack: 12424 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 182612 kB' 'Slab: 688168 kB' 'SReclaimable: 182612 kB' 'SUnreclaim: 505556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.738 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: identical skip iterations for every node0 meminfo key from Active(file) through FilePmdMapped while scanning for HugePages_Surp; the last iterations and the match resume below]
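[editor's note: a minimal sketch, not part of the log, of the per-node branch seen at common.sh@23-24 above — given a node argument the script reads that node's meminfo, whose lines carry a "Node 0 " prefix that common.sh@29 strips before the same key scan]
node=0 mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
# drop the "Node N " prefix, then the scan proceeds exactly as before
sed 's/^Node [0-9]* *//' "$mem_f" | grep '^HugePages_Surp'        # -> 0 for node0 here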
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:04.739 node0=1024 expecting 1024
00:29:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.739
00:04:04.739 real 0m7.370s
00:04:04.739 user 0m2.818s
00:04:04.739 sys 0m4.627s
00:29:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:04.739 00:29:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:04.739 ************************************
00:04:04.739 END TEST no_shrink_alloc
00:04:04.739 ************************************
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:05.001 00:29:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:05.001
00:04:05.001 real 0m25.839s
00:04:05.001 user 0m9.660s
00:04:05.001 sys 0m16.306s
00:29:23 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:05.001 00:29:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:05.001 ************************************
00:04:05.001 END TEST hugepages
00:04:05.001 ************************************
00:04:05.001 00:29:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:29:23 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:29:23 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:05.001 00:29:23 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:05.001 ************************************
00:04:05.001 START TEST driver
00:04:05.001 ************************************
00:04:05.001 00:29:23 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:05.001 * Looking for test storage...
00:04:05.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:05.001 00:29:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:05.001 00:29:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:05.001 00:29:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.352 00:29:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:29:27 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:29:27 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:10.352 00:29:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:10.352 ************************************
00:04:10.352 START TEST guess_driver
00:04:10.352 ************************************
00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@25
-- # unsafe_vfio=N 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:10.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:10.352 Looking for driver=vfio-pci 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.352 00:29:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.899 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.899 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.899 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.160 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 
00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.161 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.733 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.733 00:29:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.733 00:29:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.733 00:29:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.944 00:04:17.944 real 0m8.132s 00:04:17.944 user 0m2.533s 00:04:17.944 sys 0m4.697s 00:04:17.944 00:29:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.944 00:29:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.944 ************************************ 00:04:17.944 END TEST guess_driver 00:04:17.944 ************************************ 00:04:17.944 00:04:17.944 real 0m12.965s 00:04:17.944 user 0m3.875s 00:04:17.944 sys 0m7.381s 00:04:17.944 00:29:36 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.944 00:29:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.944 ************************************ 00:04:17.944 END TEST driver 00:04:17.944 ************************************ 00:04:17.944 00:29:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:17.944 00:29:36 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:17.944 00:29:36 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.944 00:29:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.944 ************************************ 00:04:17.944 START TEST devices 00:04:17.944 ************************************ 00:04:17.944 00:29:36 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:17.944 * Looking for test storage... 
00:04:18.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.205 00:29:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.205 00:29:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:18.205 00:29:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.205 00:29:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:22.418 00:29:39 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:22.418 No valid GPT data, bailing 00:04:22.418 00:29:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:22.418 00:29:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:22.418 00:29:39 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:22.418 00:29:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.418 ************************************ 00:04:22.418 START TEST nvme_mount 00:04:22.418 ************************************ 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.418 00:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:22.418 00:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.418 00:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:22.990 Creating new GPT entries in memory. 00:04:22.990 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.990 other utilities. 00:04:22.990 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.990 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.990 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:22.990 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.990 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:23.935 Creating new GPT entries in memory. 00:04:23.935 The operation has completed successfully. 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 166566 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.935 00:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.239 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.240 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.500 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.500 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.761 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:27.761 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:27.761 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.761 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.761 00:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
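[editor note] The verify calls in this trace restrict setup.sh to a single BDF via PCI_ALLOWED=0000:65:00.0, then read each config line as `pci _ _ status` and set found=1 when the allowed device reports the expected active mount. A reduced sketch of that parse; the two-line here-doc sample is invented for illustration and only approximates the real `setup.sh config` output:

# Sketch of the verify loop traced above; the sample input is invented.
dev=0000:65:00.0 mounts=nvme0n1:nvme0n1 found=0
while read -r pci _ _ status; do
    [[ $pci == "$dev" && $status == *"$mounts"* ]] && found=1
done <<'EOF'
0000:80:01.0 8086 0b00 no active devices
0000:65:00.0 144d a80a Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev
EOF
(( found == 1 )) && echo "mount verified on $dev"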
00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.066 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:31.327 
00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.327 00:29:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.690 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.691 00:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.951 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.951 00:04:34.951 real 0m13.207s 00:04:34.951 user 0m4.068s 00:04:34.951 sys 0m6.998s 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.951 00:29:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:34.951 ************************************ 00:04:34.951 END TEST nvme_mount 00:04:34.951 ************************************ 00:04:35.212 00:29:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:35.212 00:29:53 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:35.212 00:29:53 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:35.212 00:29:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.212 ************************************ 00:04:35.212 START TEST dm_mount 00:04:35.212 ************************************ 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:35.212 00:29:53 
setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.212 00:29:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:36.156 Creating new GPT entries in memory. 00:04:36.156 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:36.156 other utilities. 00:04:36.156 00:29:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.156 00:29:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.156 00:29:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.156 00:29:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.156 00:29:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:37.100 Creating new GPT entries in memory. 00:04:37.100 The operation has completed successfully. 00:04:37.100 00:29:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.100 00:29:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.100 00:29:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.100 00:29:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.100 00:29:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:38.487 The operation has completed successfully. 
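[editor note] Both nvme_mount and dm_mount drive the same partition_drive helper: size is converted from bytes to 512-byte sectors, and each pass computes a window starting at sector 2048 before calling sgdisk under flock, which is exactly where the --new=1:2048:2099199 and --new=2:2099200:4196351 calls above come from. A condensed, destructive sketch of that loop; run it only against a scratch disk:

# Sketch of setup/common.sh's partition loop; $disk must be disposable.
disk=/dev/nvme0n1 part_no=2
size=$(( 1073741824 / 512 ))           # 1 GiB expressed in 512-byte sectors
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done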
00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 171588 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.487 00:29:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.792 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.793 00:29:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:41.793 00:30:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:45.095 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:45.355 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:45.616 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:45.616 00:04:45.616 real 0m10.403s 00:04:45.616 user 0m2.760s 00:04:45.616 sys 0m4.700s 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.616 00:30:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.616 ************************************ 00:04:45.616 END TEST dm_mount 00:04:45.616 ************************************ 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.616 00:30:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.877 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:45.877 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:45.877 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:45.877 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 
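[editor note] The trap-driven cleanup running here always unwinds in the same order: unmount the test mount points, remove the dm target, then wipefs the partitions and finally the whole disk so the GPT and PMBR signatures are gone before the next run. A hedged sketch of that teardown (destructive; $mnt stands in for the long workspace path used in this run):

# Sketch of the cleanup_nvme/cleanup_dm pair traced in this section.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
mountpoint -q "$mnt/nvme_mount" && umount "$mnt/nvme_mount"
mountpoint -q "$mnt/dm_mount"   && umount "$mnt/dm_mount"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
    [[ -b $dev ]] && wipefs --all "$dev"   # partitions first, then the disk
done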
00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.877 00:30:03 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:45.877 00:04:45.877 real 0m27.872s 00:04:45.877 user 0m8.277s 00:04:45.877 sys 0m14.367s 00:04:45.877 00:30:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.877 00:30:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.877 ************************************ 00:04:45.877 END TEST devices 00:04:45.877 ************************************ 00:04:45.877 00:04:45.877 real 1m31.984s 00:04:45.877 user 0m29.850s 00:04:45.877 sys 0m52.932s 00:04:45.877 00:30:04 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.877 00:30:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.877 ************************************ 00:04:45.877 END TEST setup.sh 00:04:45.877 ************************************ 00:04:45.877 00:30:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:49.182 Hugepages 00:04:49.182 node hugesize free / total 00:04:49.182 node0 1048576kB 0 / 0 00:04:49.183 node0 2048kB 2048 / 2048 00:04:49.183 node1 1048576kB 0 / 0 00:04:49.183 node1 2048kB 0 / 0 00:04:49.183 00:04:49.183 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.183 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:49.183 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:49.444 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:49.444 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:49.444 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:49.444 00:30:07 -- spdk/autotest.sh@130 -- # uname -s 00:04:49.444 00:30:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:49.444 00:30:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:49.444 00:30:07 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.746 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.746 0000:80:01.1 (8086 
0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:53.007 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.927 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:54.927 00:30:13 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:56.344 00:30:14 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:56.344 00:30:14 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:56.344 00:30:14 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:56.344 00:30:14 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:56.344 00:30:14 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:56.344 00:30:14 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:56.344 00:30:14 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.344 00:30:14 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:56.344 00:30:14 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:56.344 00:30:14 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:56.344 00:30:14 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:56.344 00:30:14 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.890 Waiting for block devices as requested 00:04:58.890 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:58.890 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:58.890 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:59.152 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:59.152 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:59.152 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:59.412 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:59.412 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:59.412 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:59.673 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:59.673 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:59.673 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:59.934 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:59.934 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:59.934 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:59.934 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:00.195 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:00.455 00:30:18 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:00.455 00:30:18 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:05:00.455 00:30:18 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:00.455 00:30:18 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:00.455 
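get_nvme_ctrlr_from_bdf, whose readlink/grep/basename steps appear in the trace above, resolves a PCI address to the kernel's nvme node by walking the /sys/class/nvme symlinks. Condensed into a loop (BDF taken from this run):

# Map a PCI BDF to its nvme controller name via sysfs.
bdf=0000:65:00.0
for link in /sys/class/nvme/nvme*; do
    path=$(readlink -f "$link")    # e.g. /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
    [[ $path == *"$bdf"* ]] && basename "$path"   # prints: nvme0
done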
00:30:18 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:00.455 00:30:18 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:00.455 00:30:18 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:00.455 00:30:18 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:05:00.455 00:30:18 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:00.455 00:30:18 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:00.455 00:30:18 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:00.455 00:30:18 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:00.455 00:30:18 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:00.455 00:30:18 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:00.455 00:30:18 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:00.455 00:30:18 -- common/autotest_common.sh@1556 -- # continue 00:05:00.455 00:30:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:00.455 00:30:18 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:00.455 00:30:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.455 00:30:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:00.455 00:30:18 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:00.455 00:30:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.455 00:30:18 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.760 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:03.760 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:04.334 00:30:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:04.334 00:30:22 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:04.334 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:04.334 00:30:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:04.334 00:30:22 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:04.334 00:30:22 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:04.334 00:30:22 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:04.334 00:30:22 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:04.334 00:30:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:04.334 00:30:22 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:04.334 00:30:22 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:04.334 00:30:22 -- common/autotest_common.sh@1513 -- 
# bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.334 00:30:22 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.334 00:30:22 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:04.334 00:30:22 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:04.334 00:30:22 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:04.334 00:30:22 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:04.334 00:30:22 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:04.334 00:30:22 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:05:04.334 00:30:22 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:04.334 00:30:22 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:05:04.334 00:30:22 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:05:04.334 00:30:22 -- common/autotest_common.sh@1592 -- # return 0 00:05:04.334 00:30:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:04.334 00:30:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:04.334 00:30:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.334 00:30:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.334 00:30:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:04.334 00:30:22 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:04.334 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:04.334 00:30:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:04.334 00:30:22 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.334 00:30:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:04.334 00:30:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:04.334 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:05:04.334 ************************************ 00:05:04.334 START TEST env 00:05:04.334 ************************************ 00:05:04.334 00:30:22 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.334 * Looking for test storage... 
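The opal_revert_cleanup gate traced above filters the detected controllers by PCI device ID: only 0x0a54 (an Intel datacenter NVMe device ID) gets the opal revert treatment, while the Samsung controller here (vendor 144d) reports 0xa80a, so the bdfs array stays empty and the helper returns immediately. The check is a plain sysfs read:

# Filter NVMe BDFs by PCI device ID, as get_nvme_bdfs_by_id does above.
want=0x0a54
for bdf in 0000:65:00.0; do                        # BDF list from gen_nvme.sh in this run
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")  # reads 0xa80a here
    [[ $dev == "$want" ]] && echo "$bdf"           # no match, nothing printed
done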
00:05:04.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:04.334 00:30:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.334 00:30:22 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:04.334 00:30:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:04.334 00:30:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.596 ************************************ 00:05:04.596 START TEST env_memory 00:05:04.596 ************************************ 00:05:04.596 00:30:22 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.596 00:05:04.596 00:05:04.596 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.596 http://cunit.sourceforge.net/ 00:05:04.596 00:05:04.596 00:05:04.596 Suite: memory 00:05:04.596 Test: alloc and free memory map ...[2024-06-08 00:30:22.675065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.596 passed 00:05:04.596 Test: mem map translation ...[2024-06-08 00:30:22.700601] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.596 [2024-06-08 00:30:22.700628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.596 [2024-06-08 00:30:22.700675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.596 [2024-06-08 00:30:22.700683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.596 passed 00:05:04.596 Test: mem map registration ...[2024-06-08 00:30:22.755858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:04.596 [2024-06-08 00:30:22.755880] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:04.596 passed 00:05:04.596 Test: mem map adjacent registrations ...passed 00:05:04.596 00:05:04.596 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.596 suites 1 1 n/a 0 0 00:05:04.596 tests 4 4 4 0 0 00:05:04.596 asserts 152 152 152 0 n/a 00:05:04.596 00:05:04.596 Elapsed time = 0.196 seconds 00:05:04.596 00:05:04.596 real 0m0.208s 00:05:04.596 user 0m0.196s 00:05:04.596 sys 0m0.011s 00:05:04.596 00:30:22 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:04.596 00:30:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:04.596 ************************************ 00:05:04.596 END TEST env_memory 00:05:04.596 ************************************ 00:05:04.596 00:30:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.596 00:30:22 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:04.596 00:30:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
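Every suite in this log is launched through run_test, which produces the asterisk banners and the real/user/sys timing summaries seen throughout. A simplified stand-in, sketch only: the real helper in autotest_common.sh also tracks exit codes and xtrace state, which is omitted here.

# Banner-plus-timing wrapper in the style of run_test (illustrative).
run_test_sketch() {
    local name=$1 banner='************************************'
    shift
    echo "$banner"; echo "START TEST $name"; echo "$banner"
    time "$@"                          # prints real/user/sys like the lines above
    echo "$banner"; echo "END TEST $name"; echo "$banner"
}
run_test_sketch env_memory ./test/env/memory/memory_ut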
00:05:04.596 00:30:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.857 ************************************ 00:05:04.857 START TEST env_vtophys 00:05:04.857 ************************************ 00:05:04.857 00:30:22 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.857 EAL: lib.eal log level changed from notice to debug 00:05:04.857 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.857 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.857 EAL: Detected lcore 2 as core 2 on socket 0 00:05:04.857 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.857 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.857 EAL: Detected lcore 5 as core 5 on socket 0 00:05:04.857 EAL: Detected lcore 6 as core 6 on socket 0 00:05:04.857 EAL: Detected lcore 7 as core 7 on socket 0 00:05:04.857 EAL: Detected lcore 8 as core 8 on socket 0 00:05:04.857 EAL: Detected lcore 9 as core 9 on socket 0 00:05:04.857 EAL: Detected lcore 10 as core 10 on socket 0 00:05:04.857 EAL: Detected lcore 11 as core 11 on socket 0 00:05:04.857 EAL: Detected lcore 12 as core 12 on socket 0 00:05:04.857 EAL: Detected lcore 13 as core 13 on socket 0 00:05:04.857 EAL: Detected lcore 14 as core 14 on socket 0 00:05:04.857 EAL: Detected lcore 15 as core 15 on socket 0 00:05:04.857 EAL: Detected lcore 16 as core 16 on socket 0 00:05:04.857 EAL: Detected lcore 17 as core 17 on socket 0 00:05:04.857 EAL: Detected lcore 18 as core 18 on socket 0 00:05:04.857 EAL: Detected lcore 19 as core 19 on socket 0 00:05:04.857 EAL: Detected lcore 20 as core 20 on socket 0 00:05:04.857 EAL: Detected lcore 21 as core 21 on socket 0 00:05:04.857 EAL: Detected lcore 22 as core 22 on socket 0 00:05:04.857 EAL: Detected lcore 23 as core 23 on socket 0 00:05:04.857 EAL: Detected lcore 24 as core 24 on socket 0 00:05:04.858 EAL: Detected lcore 25 as core 25 on socket 0 00:05:04.858 EAL: Detected lcore 26 as core 26 on socket 0 00:05:04.858 EAL: Detected lcore 27 as core 27 on socket 0 00:05:04.858 EAL: Detected lcore 28 as core 28 on socket 0 00:05:04.858 EAL: Detected lcore 29 as core 29 on socket 0 00:05:04.858 EAL: Detected lcore 30 as core 30 on socket 0 00:05:04.858 EAL: Detected lcore 31 as core 31 on socket 0 00:05:04.858 EAL: Detected lcore 32 as core 32 on socket 0 00:05:04.858 EAL: Detected lcore 33 as core 33 on socket 0 00:05:04.858 EAL: Detected lcore 34 as core 34 on socket 0 00:05:04.858 EAL: Detected lcore 35 as core 35 on socket 0 00:05:04.858 EAL: Detected lcore 36 as core 0 on socket 1 00:05:04.858 EAL: Detected lcore 37 as core 1 on socket 1 00:05:04.858 EAL: Detected lcore 38 as core 2 on socket 1 00:05:04.858 EAL: Detected lcore 39 as core 3 on socket 1 00:05:04.858 EAL: Detected lcore 40 as core 4 on socket 1 00:05:04.858 EAL: Detected lcore 41 as core 5 on socket 1 00:05:04.858 EAL: Detected lcore 42 as core 6 on socket 1 00:05:04.858 EAL: Detected lcore 43 as core 7 on socket 1 00:05:04.858 EAL: Detected lcore 44 as core 8 on socket 1 00:05:04.858 EAL: Detected lcore 45 as core 9 on socket 1 00:05:04.858 EAL: Detected lcore 46 as core 10 on socket 1 00:05:04.858 EAL: Detected lcore 47 as core 11 on socket 1 00:05:04.858 EAL: Detected lcore 48 as core 12 on socket 1 00:05:04.858 EAL: Detected lcore 49 as core 13 on socket 1 00:05:04.858 EAL: Detected lcore 50 as core 14 on socket 1 00:05:04.858 EAL: Detected lcore 51 as core 15 on socket 1 00:05:04.858 EAL: Detected lcore 52 as core 16 on socket 1 00:05:04.858 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:04.858 EAL: Detected lcore 54 as core 18 on socket 1 00:05:04.858 EAL: Detected lcore 55 as core 19 on socket 1 00:05:04.858 EAL: Detected lcore 56 as core 20 on socket 1 00:05:04.858 EAL: Detected lcore 57 as core 21 on socket 1 00:05:04.858 EAL: Detected lcore 58 as core 22 on socket 1 00:05:04.858 EAL: Detected lcore 59 as core 23 on socket 1 00:05:04.858 EAL: Detected lcore 60 as core 24 on socket 1 00:05:04.858 EAL: Detected lcore 61 as core 25 on socket 1 00:05:04.858 EAL: Detected lcore 62 as core 26 on socket 1 00:05:04.858 EAL: Detected lcore 63 as core 27 on socket 1 00:05:04.858 EAL: Detected lcore 64 as core 28 on socket 1 00:05:04.858 EAL: Detected lcore 65 as core 29 on socket 1 00:05:04.858 EAL: Detected lcore 66 as core 30 on socket 1 00:05:04.858 EAL: Detected lcore 67 as core 31 on socket 1 00:05:04.858 EAL: Detected lcore 68 as core 32 on socket 1 00:05:04.858 EAL: Detected lcore 69 as core 33 on socket 1 00:05:04.858 EAL: Detected lcore 70 as core 34 on socket 1 00:05:04.858 EAL: Detected lcore 71 as core 35 on socket 1 00:05:04.858 EAL: Detected lcore 72 as core 0 on socket 0 00:05:04.858 EAL: Detected lcore 73 as core 1 on socket 0 00:05:04.858 EAL: Detected lcore 74 as core 2 on socket 0 00:05:04.858 EAL: Detected lcore 75 as core 3 on socket 0 00:05:04.858 EAL: Detected lcore 76 as core 4 on socket 0 00:05:04.858 EAL: Detected lcore 77 as core 5 on socket 0 00:05:04.858 EAL: Detected lcore 78 as core 6 on socket 0 00:05:04.858 EAL: Detected lcore 79 as core 7 on socket 0 00:05:04.858 EAL: Detected lcore 80 as core 8 on socket 0 00:05:04.858 EAL: Detected lcore 81 as core 9 on socket 0 00:05:04.858 EAL: Detected lcore 82 as core 10 on socket 0 00:05:04.858 EAL: Detected lcore 83 as core 11 on socket 0 00:05:04.858 EAL: Detected lcore 84 as core 12 on socket 0 00:05:04.858 EAL: Detected lcore 85 as core 13 on socket 0 00:05:04.858 EAL: Detected lcore 86 as core 14 on socket 0 00:05:04.858 EAL: Detected lcore 87 as core 15 on socket 0 00:05:04.858 EAL: Detected lcore 88 as core 16 on socket 0 00:05:04.858 EAL: Detected lcore 89 as core 17 on socket 0 00:05:04.858 EAL: Detected lcore 90 as core 18 on socket 0 00:05:04.858 EAL: Detected lcore 91 as core 19 on socket 0 00:05:04.858 EAL: Detected lcore 92 as core 20 on socket 0 00:05:04.858 EAL: Detected lcore 93 as core 21 on socket 0 00:05:04.858 EAL: Detected lcore 94 as core 22 on socket 0 00:05:04.858 EAL: Detected lcore 95 as core 23 on socket 0 00:05:04.858 EAL: Detected lcore 96 as core 24 on socket 0 00:05:04.858 EAL: Detected lcore 97 as core 25 on socket 0 00:05:04.858 EAL: Detected lcore 98 as core 26 on socket 0 00:05:04.858 EAL: Detected lcore 99 as core 27 on socket 0 00:05:04.858 EAL: Detected lcore 100 as core 28 on socket 0 00:05:04.858 EAL: Detected lcore 101 as core 29 on socket 0 00:05:04.858 EAL: Detected lcore 102 as core 30 on socket 0 00:05:04.858 EAL: Detected lcore 103 as core 31 on socket 0 00:05:04.858 EAL: Detected lcore 104 as core 32 on socket 0 00:05:04.858 EAL: Detected lcore 105 as core 33 on socket 0 00:05:04.858 EAL: Detected lcore 106 as core 34 on socket 0 00:05:04.858 EAL: Detected lcore 107 as core 35 on socket 0 00:05:04.858 EAL: Detected lcore 108 as core 0 on socket 1 00:05:04.858 EAL: Detected lcore 109 as core 1 on socket 1 00:05:04.858 EAL: Detected lcore 110 as core 2 on socket 1 00:05:04.858 EAL: Detected lcore 111 as core 3 on socket 1 00:05:04.858 EAL: Detected lcore 112 as core 4 on socket 1 00:05:04.858 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:04.858 EAL: Detected lcore 114 as core 6 on socket 1 00:05:04.858 EAL: Detected lcore 115 as core 7 on socket 1 00:05:04.858 EAL: Detected lcore 116 as core 8 on socket 1 00:05:04.858 EAL: Detected lcore 117 as core 9 on socket 1 00:05:04.858 EAL: Detected lcore 118 as core 10 on socket 1 00:05:04.858 EAL: Detected lcore 119 as core 11 on socket 1 00:05:04.858 EAL: Detected lcore 120 as core 12 on socket 1 00:05:04.858 EAL: Detected lcore 121 as core 13 on socket 1 00:05:04.858 EAL: Detected lcore 122 as core 14 on socket 1 00:05:04.858 EAL: Detected lcore 123 as core 15 on socket 1 00:05:04.858 EAL: Detected lcore 124 as core 16 on socket 1 00:05:04.858 EAL: Detected lcore 125 as core 17 on socket 1 00:05:04.858 EAL: Detected lcore 126 as core 18 on socket 1 00:05:04.858 EAL: Detected lcore 127 as core 19 on socket 1 00:05:04.858 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:04.858 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:04.858 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:04.858 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:04.858 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:04.858 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:04.858 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:04.858 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:04.858 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:04.858 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:04.858 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:04.858 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:04.858 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:04.858 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:04.858 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:04.858 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:04.858 EAL: Maximum logical cores by configuration: 128 00:05:04.858 EAL: Detected CPU lcores: 128 00:05:04.858 EAL: Detected NUMA nodes: 2 00:05:04.858 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:04.858 EAL: Detected shared linkage of DPDK 00:05:04.858 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.858 EAL: Bus pci wants IOVA as 'DC' 00:05:04.858 EAL: Buses did not request a specific IOVA mode. 00:05:04.858 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:04.858 EAL: Selected IOVA mode 'VA' 00:05:04.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.858 EAL: Probing VFIO support... 00:05:04.858 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.858 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.858 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.858 EAL: VFIO support initialized 00:05:04.858 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.858 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.858 EAL: Setting up physically contiguous memory... 
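The lcore map EAL prints above (lcores 0-71 across two 36-core sockets, hyperthread siblings 72-143, capped at 128 usable lcores by configuration) is the kernel's own topology; lscpu's parseable mode reports the same cpu/core/socket triples, and numactl summarizes the two NUMA nodes (output lines shown as comments are illustrative for this machine):

# Inspect the topology EAL just detected.
lscpu -p=cpu,core,socket | grep -v '^#' | head -3
#   0,0,0    <- lcore 0 = core 0 on socket 0
#   1,1,0
#   2,2,0
numactl -H | head -2    # node count and per-node CPU lists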
00:05:04.858 EAL: Setting maximum number of open files to 524288 00:05:04.858 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.858 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:04.858 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.858 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.858 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.858 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.858 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.859 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.859 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:04.859 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:04.859 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.859 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.859 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.859 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.859 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:04.859 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.859 EAL: Hugepages will be freed exactly as allocated. 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: TSC frequency is ~2400000 KHz 00:05:04.859 EAL: Main lcore 0 is ready (tid=7f5cdca01a00;cpuset=[0]) 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 0 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.859 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.859 00:05:04.859 00:05:04.859 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.859 http://cunit.sourceforge.net/ 00:05:04.859 00:05:04.859 00:05:04.859 Suite: components_suite 00:05:04.859 Test: vtophys_malloc_test ...passed 00:05:04.859 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.859 EAL: Trying to obtain current memory policy. 
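The repeated "Ask a virtual area of 0x400000000 bytes" requests above follow directly from the segment-list geometry EAL announced: 8192 segments of 2 MiB per list is 16 GiB of reserved virtual address space, with four lists per NUMA node. Quick arithmetic check:

# Verify the memseg VA reservation sizes printed by EAL above.
n_segs=8192; hugesz=$((2 * 1024 * 1024))       # n_segs:8192, hugepage_sz:2097152
printf '0x%x\n' $((n_segs * hugesz))           # 0x400000000 = 16 GiB per list
echo "$((4 * n_segs * hugesz / 1024**3)) GiB reserved per socket"   # 64 GiB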
00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.859 EAL: Trying to obtain current memory policy. 00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.859 EAL: Restoring previous memory policy: 4 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.859 EAL: request: mp_malloc_sync 00:05:04.859 EAL: No shared files mode enabled, IPC is disabled 00:05:04.859 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.859 EAL: Trying to obtain current memory policy. 
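The expand/shrink sizes in vtophys_spdk_malloc_test step through 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258 above, then 514 and 1026 below), presumably so that each round pushes the heap across a new power-of-two boundary. The progression:

# Reproduce the heap expansion sizes seen in the vtophys test above.
for n in {1..10}; do
    echo "$((2**n + 2)) MB"    # 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026
done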
00:05:04.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.120 EAL: Restoring previous memory policy: 4 00:05:05.120 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.120 EAL: request: mp_malloc_sync 00:05:05.120 EAL: No shared files mode enabled, IPC is disabled 00:05:05.120 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.120 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.120 EAL: request: mp_malloc_sync 00:05:05.120 EAL: No shared files mode enabled, IPC is disabled 00:05:05.120 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.120 EAL: Trying to obtain current memory policy. 00:05:05.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.379 EAL: Restoring previous memory policy: 4 00:05:05.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.380 EAL: request: mp_malloc_sync 00:05:05.380 EAL: No shared files mode enabled, IPC is disabled 00:05:05.380 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.639 EAL: request: mp_malloc_sync 00:05:05.639 EAL: No shared files mode enabled, IPC is disabled 00:05:05.639 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.639 passed 00:05:05.639 00:05:05.639 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.639 suites 1 1 n/a 0 0 00:05:05.639 tests 2 2 2 0 0 00:05:05.639 asserts 497 497 497 0 n/a 00:05:05.639 00:05:05.639 Elapsed time = 0.644 seconds 00:05:05.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.639 EAL: request: mp_malloc_sync 00:05:05.639 EAL: No shared files mode enabled, IPC is disabled 00:05:05.639 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.639 EAL: No shared files mode enabled, IPC is disabled 00:05:05.639 EAL: No shared files mode enabled, IPC is disabled 00:05:05.639 EAL: No shared files mode enabled, IPC is disabled 00:05:05.639 00:05:05.639 real 0m0.769s 00:05:05.639 user 0m0.405s 00:05:05.639 sys 0m0.326s 00:05:05.639 00:30:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:05.639 00:30:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.639 ************************************ 00:05:05.639 END TEST env_vtophys 00:05:05.639 ************************************ 00:05:05.639 00:30:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.639 00:30:23 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:05.639 00:30:23 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:05.639 00:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.639 ************************************ 00:05:05.639 START TEST env_pci 00:05:05.639 ************************************ 00:05:05.639 00:30:23 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.639 00:05:05.640 00:05:05.640 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.640 http://cunit.sourceforge.net/ 00:05:05.640 00:05:05.640 00:05:05.640 Suite: pci 00:05:05.640 Test: pci_hook ...[2024-06-08 00:30:23.763108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 183486 has claimed it 00:05:05.640 EAL: Cannot find device (10000:00:01.0) 00:05:05.640 EAL: Failed to attach device on primary process 00:05:05.640 passed 00:05:05.640 00:05:05.640 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:05.640 suites 1 1 n/a 0 0 00:05:05.640 tests 1 1 1 0 0 00:05:05.640 asserts 25 25 25 0 n/a 00:05:05.640 00:05:05.640 Elapsed time = 0.029 seconds 00:05:05.640 00:05:05.640 real 0m0.049s 00:05:05.640 user 0m0.015s 00:05:05.640 sys 0m0.034s 00:05:05.640 00:30:23 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:05.640 00:30:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.640 ************************************ 00:05:05.640 END TEST env_pci 00:05:05.640 ************************************ 00:05:05.640 00:30:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.640 00:30:23 env -- env/env.sh@15 -- # uname 00:05:05.640 00:30:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.640 00:30:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.640 00:30:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.640 00:30:23 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:05.640 00:30:23 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:05.640 00:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.640 ************************************ 00:05:05.640 START TEST env_dpdk_post_init 00:05:05.640 ************************************ 00:05:05.640 00:30:23 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.640 EAL: Detected CPU lcores: 128 00:05:05.640 EAL: Detected NUMA nodes: 2 00:05:05.640 EAL: Detected shared linkage of DPDK 00:05:05.640 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.900 EAL: Selected IOVA mode 'VA' 00:05:05.900 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.900 EAL: VFIO support initialized 00:05:05.900 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.900 EAL: Using IOMMU type 1 (Type 1) 00:05:05.900 EAL: Ignore mapping IO port bar(1) 00:05:06.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:06.161 EAL: Ignore mapping IO port bar(1) 00:05:06.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:06.422 EAL: Ignore mapping IO port bar(1) 00:05:06.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:06.683 EAL: Ignore mapping IO port bar(1) 00:05:06.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:06.945 EAL: Ignore mapping IO port bar(1) 00:05:06.945 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:07.205 EAL: Ignore mapping IO port bar(1) 00:05:07.205 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:07.205 EAL: Ignore mapping IO port bar(1) 00:05:07.466 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:07.467 EAL: Ignore mapping IO port bar(1) 00:05:07.726 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:07.726 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:07.986 EAL: Ignore mapping IO port bar(1) 00:05:07.986 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:08.246 EAL: Ignore mapping IO port bar(1) 00:05:08.246 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:08.506 EAL: Ignore mapping IO port bar(1) 00:05:08.506 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:08.765 EAL: Ignore mapping IO port bar(1) 00:05:08.765 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:08.765 EAL: Ignore mapping IO port bar(1) 00:05:09.024 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:09.024 EAL: Ignore mapping IO port bar(1) 00:05:09.284 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:09.284 EAL: Ignore mapping IO port bar(1) 00:05:09.543 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:09.543 EAL: Ignore mapping IO port bar(1) 00:05:09.543 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:09.543 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:09.543 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:09.804 Starting DPDK initialization... 00:05:09.804 Starting SPDK post initialization... 00:05:09.804 SPDK NVMe probe 00:05:09.804 Attaching to 0000:65:00.0 00:05:09.804 Attached to 0000:65:00.0 00:05:09.804 Cleaning up... 00:05:11.719 00:05:11.719 real 0m5.716s 00:05:11.719 user 0m0.190s 00:05:11.719 sys 0m0.068s 00:05:11.719 00:30:29 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:11.719 00:30:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.719 ************************************ 00:05:11.719 END TEST env_dpdk_post_init 00:05:11.719 ************************************ 00:05:11.719 00:30:29 env -- env/env.sh@26 -- # uname 00:05:11.719 00:30:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.719 00:30:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.719 00:30:29 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:11.719 00:30:29 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:11.719 00:30:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.719 ************************************ 00:05:11.719 START TEST env_mem_callbacks 00:05:11.719 ************************************ 00:05:11.719 00:30:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.719 EAL: Detected CPU lcores: 128 00:05:11.719 EAL: Detected NUMA nodes: 2 00:05:11.719 EAL: Detected shared linkage of DPDK 00:05:11.719 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.719 EAL: Selected IOVA mode 'VA' 00:05:11.719 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.719 EAL: VFIO support initialized 00:05:11.719 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.719 00:05:11.719 00:05:11.719 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.719 http://cunit.sourceforge.net/ 00:05:11.719 00:05:11.719 00:05:11.719 Suite: memory 00:05:11.719 Test: test ... 
00:05:11.719 register 0x200000200000 2097152 00:05:11.719 malloc 3145728 00:05:11.719 register 0x200000400000 4194304 00:05:11.719 buf 0x200000500000 len 3145728 PASSED 00:05:11.719 malloc 64 00:05:11.719 buf 0x2000004fff40 len 64 PASSED 00:05:11.719 malloc 4194304 00:05:11.719 register 0x200000800000 6291456 00:05:11.719 buf 0x200000a00000 len 4194304 PASSED 00:05:11.719 free 0x200000500000 3145728 00:05:11.719 free 0x2000004fff40 64 00:05:11.719 unregister 0x200000400000 4194304 PASSED 00:05:11.719 free 0x200000a00000 4194304 00:05:11.719 unregister 0x200000800000 6291456 PASSED 00:05:11.719 malloc 8388608 00:05:11.719 register 0x200000400000 10485760 00:05:11.719 buf 0x200000600000 len 8388608 PASSED 00:05:11.719 free 0x200000600000 8388608 00:05:11.719 unregister 0x200000400000 10485760 PASSED 00:05:11.719 passed 00:05:11.719 00:05:11.719 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.719 suites 1 1 n/a 0 0 00:05:11.719 tests 1 1 1 0 0 00:05:11.719 asserts 15 15 15 0 n/a 00:05:11.719 00:05:11.719 Elapsed time = 0.005 seconds 00:05:11.719 00:05:11.719 real 0m0.056s 00:05:11.719 user 0m0.021s 00:05:11.719 sys 0m0.035s 00:05:11.719 00:30:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:11.719 00:30:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.719 ************************************ 00:05:11.719 END TEST env_mem_callbacks 00:05:11.719 ************************************ 00:05:11.719 00:05:11.719 real 0m7.266s 00:05:11.719 user 0m0.993s 00:05:11.719 sys 0m0.799s 00:05:11.719 00:30:29 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:11.719 00:30:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.719 ************************************ 00:05:11.719 END TEST env 00:05:11.719 ************************************ 00:05:11.719 00:30:29 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.719 00:30:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:11.719 00:30:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:11.719 00:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:11.719 ************************************ 00:05:11.719 START TEST rpc 00:05:11.719 ************************************ 00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.719 * Looking for test storage... 00:05:11.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.719 00:30:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=184650 00:05:11.719 00:30:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.719 00:30:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.719 00:30:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 184650 00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@830 -- # '[' -z 184650 ']' 00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
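The rpc suite starts spdk_tgt in the background and blocks in waitforlisten until the RPC socket appears, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. A minimal version of that wait loop (the real helper in autotest_common.sh does more, including verifying the target pid is still alive):

# Simplified waitforlisten: poll for the UNIX-domain RPC socket.
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do               # max_retries=100, as in the trace
    [[ -S $sock ]] && { echo "RPC ready on $sock"; break; }
    sleep 0.1
done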
00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:11.719 00:30:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.719 [2024-06-08 00:30:29.983921] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:05:11.719 [2024-06-08 00:30:29.983982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184650 ]
00:05:11.979 EAL: No free 2048 kB hugepages reported on node 1
00:05:11.979 [2024-06-08 00:30:30.054440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.979 [2024-06-08 00:30:30.128960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:11.979 [2024-06-08 00:30:30.129002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 184650' to capture a snapshot of events at runtime.
00:05:11.979 [2024-06-08 00:30:30.129010] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:11.979 [2024-06-08 00:30:30.129017] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:11.979 [2024-06-08 00:30:30.129023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid184650 for offline analysis/debug.
00:05:11.979 [2024-06-08 00:30:30.129053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.550 00:30:30 rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:12.550 00:30:30 rpc -- common/autotest_common.sh@863 -- # return 0
00:05:12.550 00:30:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:12.550 00:30:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:12.550 00:30:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:12.550 00:30:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:12.550 00:30:30 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:12.550 00:30:30 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:12.550 00:30:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:12.550 ************************************
00:05:12.550 START TEST rpc_integrity
00:05:12.550 ************************************
00:05:12.550 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity
00:05:12.550 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:12.550 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.550 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.550 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.550 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:12.550 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:12.811 {
00:05:12.811 "name": "Malloc0",
00:05:12.811 "aliases": [
00:05:12.811 "d36d3c61-3bed-4d50-a652-a0918bc4a941"
00:05:12.811 ],
00:05:12.811 "product_name": "Malloc disk",
00:05:12.811 "block_size": 512,
00:05:12.811 "num_blocks": 16384,
00:05:12.811 "uuid": "d36d3c61-3bed-4d50-a652-a0918bc4a941",
00:05:12.811 "assigned_rate_limits": {
00:05:12.811 "rw_ios_per_sec": 0,
00:05:12.811 "rw_mbytes_per_sec": 0,
00:05:12.811 "r_mbytes_per_sec": 0,
00:05:12.811 "w_mbytes_per_sec": 0
00:05:12.811 },
00:05:12.811 "claimed": false,
00:05:12.811 "zoned": false,
00:05:12.811 "supported_io_types": {
00:05:12.811 "read": true,
00:05:12.811 "write": true,
00:05:12.811 "unmap": true,
00:05:12.811 "write_zeroes": true,
00:05:12.811 "flush": true,
00:05:12.811 "reset": true,
00:05:12.811 "compare": false,
00:05:12.811 "compare_and_write": false,
00:05:12.811 "abort": true,
00:05:12.811 "nvme_admin": false,
00:05:12.811 "nvme_io": false
00:05:12.811 },
00:05:12.811 "memory_domains": [
00:05:12.811 {
00:05:12.811 "dma_device_id": "system",
00:05:12.811 "dma_device_type": 1
00:05:12.811 },
00:05:12.811 {
00:05:12.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:12.811 "dma_device_type": 2
00:05:12.811 }
00:05:12.811 ],
00:05:12.811 "driver_specific": {}
00:05:12.811 }
00:05:12.811 ]'
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 [2024-06-08 00:30:30.933852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:12.811 [2024-06-08 00:30:30.933886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:12.811 [2024-06-08 00:30:30.933899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc3630
00:05:12.811 [2024-06-08 00:30:30.933907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:12.811 [2024-06-08 00:30:30.935204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:12.811 [2024-06-08 00:30:30.935226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:12.811 Passthru0
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:12.811 {
00:05:12.811 "name": "Malloc0",
00:05:12.811 "aliases": [
00:05:12.811 "d36d3c61-3bed-4d50-a652-a0918bc4a941"
00:05:12.811 ],
00:05:12.811 "product_name": "Malloc disk",
00:05:12.811 "block_size": 512,
00:05:12.811 "num_blocks": 16384,
00:05:12.811 "uuid": "d36d3c61-3bed-4d50-a652-a0918bc4a941",
00:05:12.811 "assigned_rate_limits": {
00:05:12.811 "rw_ios_per_sec": 0,
00:05:12.811 "rw_mbytes_per_sec": 0,
00:05:12.811 "r_mbytes_per_sec": 0,
00:05:12.811 "w_mbytes_per_sec": 0
00:05:12.811 },
00:05:12.811 "claimed": true,
00:05:12.811 "claim_type": "exclusive_write",
00:05:12.811 "zoned": false,
00:05:12.811 "supported_io_types": {
00:05:12.811 "read": true,
00:05:12.811 "write": true,
00:05:12.811 "unmap": true,
00:05:12.811 "write_zeroes": true,
00:05:12.811 "flush": true,
00:05:12.811 "reset": true,
00:05:12.811 "compare": false,
00:05:12.811 "compare_and_write": false,
00:05:12.811 "abort": true,
00:05:12.811 "nvme_admin": false,
00:05:12.811 "nvme_io": false
00:05:12.811 },
00:05:12.811 "memory_domains": [
00:05:12.811 {
00:05:12.811 "dma_device_id": "system",
00:05:12.811 "dma_device_type": 1
00:05:12.811 },
00:05:12.811 {
00:05:12.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:12.811 "dma_device_type": 2
00:05:12.811 }
00:05:12.811 ],
00:05:12.811 "driver_specific": {}
00:05:12.811 },
00:05:12.811 {
00:05:12.811 "name": "Passthru0",
00:05:12.811 "aliases": [
00:05:12.811 "7aa7d328-8aa1-588c-b1b3-f2e62f15d732"
00:05:12.811 ],
00:05:12.811 "product_name": "passthru",
00:05:12.811 "block_size": 512,
00:05:12.811 "num_blocks": 16384,
00:05:12.811 "uuid": "7aa7d328-8aa1-588c-b1b3-f2e62f15d732",
00:05:12.811 "assigned_rate_limits": {
00:05:12.811 "rw_ios_per_sec": 0,
00:05:12.811 "rw_mbytes_per_sec": 0,
00:05:12.811 "r_mbytes_per_sec": 0,
00:05:12.811 "w_mbytes_per_sec": 0
00:05:12.811 },
00:05:12.811 "claimed": false,
00:05:12.811 "zoned": false,
00:05:12.811 "supported_io_types": {
00:05:12.811 "read": true,
00:05:12.811 "write": true,
00:05:12.811 "unmap": true,
00:05:12.811 "write_zeroes": true,
00:05:12.811 "flush": true,
00:05:12.811 "reset": true,
00:05:12.811 "compare": false,
00:05:12.811 "compare_and_write": false,
00:05:12.811 "abort": true,
00:05:12.811 "nvme_admin": false,
00:05:12.811 "nvme_io": false
00:05:12.811 },
00:05:12.811 "memory_domains": [
00:05:12.811 {
00:05:12.811 "dma_device_id": "system",
00:05:12.811 "dma_device_type": 1
00:05:12.811 },
00:05:12.811 {
00:05:12.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:12.811 "dma_device_type": 2
00:05:12.811 }
00:05:12.811 ],
00:05:12.811 "driver_specific": {
00:05:12.811 "passthru": {
00:05:12.811 "name": "Passthru0",
00:05:12.811 "base_bdev_name": "Malloc0"
00:05:12.811 }
00:05:12.811 }
00:05:12.811 }
00:05:12.811 ]'
00:05:12.811 00:30:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:12.811 00:30:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:12.811
00:05:12.811 real 0m0.290s
00:05:12.811 user 0m0.185s
00:05:12.811 sys 0m0.040s
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:12.811 00:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:12.811 ************************************
00:05:12.811 END TEST rpc_integrity
00:05:12.811 ************************************
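
The rpc_integrity pass above creates a malloc bdev, layers a passthru bdev on it, checks the bdev list with jq at each step, and tears everything down. The same flow can be driven by hand against a running spdk_tgt with scripts/rpc.py; a minimal sketch (relative paths are an assumption, not taken from this run):

  # Create an 8 MB malloc bdev with 512-byte blocks, wrap it, verify, tear down.
  ./scripts/rpc.py bdev_malloc_create 8 512            # prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2: base bdev + passthru
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 0 again

Note in the dump above that claiming works as designed: once Passthru0 is registered, Malloc0 reports "claimed": true with "claim_type": "exclusive_write".
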
"flush": true, 00:05:13.096 "reset": true, 00:05:13.096 "compare": false, 00:05:13.096 "compare_and_write": false, 00:05:13.096 "abort": true, 00:05:13.096 "nvme_admin": false, 00:05:13.096 "nvme_io": false 00:05:13.096 }, 00:05:13.096 "memory_domains": [ 00:05:13.096 { 00:05:13.096 "dma_device_id": "system", 00:05:13.096 "dma_device_type": 1 00:05:13.096 }, 00:05:13.096 { 00:05:13.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.096 "dma_device_type": 2 00:05:13.096 } 00:05:13.096 ], 00:05:13.096 "driver_specific": {} 00:05:13.096 } 00:05:13.096 ]' 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.096 00:30:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.096 00:05:13.096 real 0m0.145s 00:05:13.096 user 0m0.093s 00:05:13.096 sys 0m0.019s 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.096 00:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.096 ************************************ 00:05:13.096 END TEST rpc_plugins 00:05:13.096 ************************************ 00:05:13.096 00:30:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.096 00:30:31 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.096 00:30:31 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.096 00:30:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.367 ************************************ 00:05:13.367 START TEST rpc_trace_cmd_test 00:05:13.367 ************************************ 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.367 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid184650", 00:05:13.367 "tpoint_group_mask": "0x8", 00:05:13.367 "iscsi_conn": { 00:05:13.367 "mask": "0x2", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "scsi": { 00:05:13.367 "mask": "0x4", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "bdev": { 00:05:13.367 "mask": "0x8", 00:05:13.367 "tpoint_mask": 
"0xffffffffffffffff" 00:05:13.367 }, 00:05:13.367 "nvmf_rdma": { 00:05:13.367 "mask": "0x10", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "nvmf_tcp": { 00:05:13.367 "mask": "0x20", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "ftl": { 00:05:13.367 "mask": "0x40", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "blobfs": { 00:05:13.367 "mask": "0x80", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "dsa": { 00:05:13.367 "mask": "0x200", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "thread": { 00:05:13.367 "mask": "0x400", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "nvme_pcie": { 00:05:13.367 "mask": "0x800", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "iaa": { 00:05:13.367 "mask": "0x1000", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "nvme_tcp": { 00:05:13.367 "mask": "0x2000", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "bdev_nvme": { 00:05:13.367 "mask": "0x4000", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 }, 00:05:13.367 "sock": { 00:05:13.367 "mask": "0x8000", 00:05:13.367 "tpoint_mask": "0x0" 00:05:13.367 } 00:05:13.367 }' 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.367 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.368 00:05:13.368 real 0m0.251s 00:05:13.368 user 0m0.212s 00:05:13.368 sys 0m0.031s 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.368 00:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.368 ************************************ 00:05:13.368 END TEST rpc_trace_cmd_test 00:05:13.368 ************************************ 00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.628 ************************************ 00:05:13.628 START TEST rpc_daemon_integrity 00:05:13.628 ************************************ 00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- 
00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:13.628 00:30:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:13.628 00:30:31 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.628 ************************************
00:05:13.628 START TEST rpc_daemon_integrity
00:05:13.628 ************************************
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:13.628 {
00:05:13.628 "name": "Malloc2",
00:05:13.628 "aliases": [
00:05:13.628 "dbabe8f4-2e5c-4625-9a29-e85fac6d7b84"
00:05:13.628 ],
00:05:13.628 "product_name": "Malloc disk",
00:05:13.628 "block_size": 512,
00:05:13.628 "num_blocks": 16384,
00:05:13.628 "uuid": "dbabe8f4-2e5c-4625-9a29-e85fac6d7b84",
00:05:13.628 "assigned_rate_limits": {
00:05:13.628 "rw_ios_per_sec": 0,
00:05:13.628 "rw_mbytes_per_sec": 0,
00:05:13.628 "r_mbytes_per_sec": 0,
00:05:13.628 "w_mbytes_per_sec": 0
00:05:13.628 },
00:05:13.628 "claimed": false,
00:05:13.628 "zoned": false,
00:05:13.628 "supported_io_types": {
00:05:13.628 "read": true,
00:05:13.628 "write": true,
00:05:13.628 "unmap": true,
00:05:13.628 "write_zeroes": true,
00:05:13.628 "flush": true,
00:05:13.628 "reset": true,
00:05:13.628 "compare": false,
00:05:13.628 "compare_and_write": false,
00:05:13.628 "abort": true,
00:05:13.628 "nvme_admin": false,
00:05:13.628 "nvme_io": false
00:05:13.628 },
00:05:13.628 "memory_domains": [
00:05:13.628 {
00:05:13.628 "dma_device_id": "system",
00:05:13.628 "dma_device_type": 1
00:05:13.628 },
00:05:13.628 {
00:05:13.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:13.628 "dma_device_type": 2
00:05:13.628 }
00:05:13.628 ],
00:05:13.628 "driver_specific": {}
00:05:13.628 }
00:05:13.628 ]'
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.628 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.628 [2024-06-08 00:30:31.840293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:13.628 [2024-06-08 00:30:31.840323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:13.629 [2024-06-08 00:30:31.840335] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc4e40
00:05:13.629 [2024-06-08 00:30:31.840341] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:13.629 [2024-06-08 00:30:31.841548] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:13.629 [2024-06-08 00:30:31.841568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:13.629 Passthru0
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.629 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:13.629 {
00:05:13.629 "name": "Malloc2",
00:05:13.629 "aliases": [
00:05:13.629 "dbabe8f4-2e5c-4625-9a29-e85fac6d7b84"
00:05:13.629 ],
00:05:13.629 "product_name": "Malloc disk",
00:05:13.629 "block_size": 512,
00:05:13.629 "num_blocks": 16384,
00:05:13.629 "uuid": "dbabe8f4-2e5c-4625-9a29-e85fac6d7b84",
00:05:13.629 "assigned_rate_limits": {
00:05:13.629 "rw_ios_per_sec": 0,
00:05:13.629 "rw_mbytes_per_sec": 0,
00:05:13.629 "r_mbytes_per_sec": 0,
00:05:13.629 "w_mbytes_per_sec": 0
00:05:13.629 },
00:05:13.629 "claimed": true,
00:05:13.629 "claim_type": "exclusive_write",
00:05:13.629 "zoned": false,
00:05:13.629 "supported_io_types": {
00:05:13.629 "read": true,
00:05:13.629 "write": true,
00:05:13.629 "unmap": true,
00:05:13.629 "write_zeroes": true,
00:05:13.629 "flush": true,
00:05:13.629 "reset": true,
00:05:13.629 "compare": false,
00:05:13.629 "compare_and_write": false,
00:05:13.629 "abort": true,
00:05:13.629 "nvme_admin": false,
00:05:13.629 "nvme_io": false
00:05:13.629 },
00:05:13.629 "memory_domains": [
00:05:13.629 {
00:05:13.629 "dma_device_id": "system",
00:05:13.629 "dma_device_type": 1
00:05:13.629 },
00:05:13.629 {
00:05:13.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:13.629 "dma_device_type": 2
00:05:13.629 }
00:05:13.629 ],
00:05:13.629 "driver_specific": {}
00:05:13.629 },
00:05:13.629 {
00:05:13.629 "name": "Passthru0",
00:05:13.629 "aliases": [
00:05:13.629 "2b78165b-ce0c-5688-b9e7-fcc59a7891b4"
00:05:13.629 ],
00:05:13.629 "product_name": "passthru",
00:05:13.629 "block_size": 512,
00:05:13.629 "num_blocks": 16384,
00:05:13.629 "uuid": "2b78165b-ce0c-5688-b9e7-fcc59a7891b4",
00:05:13.629 "assigned_rate_limits": {
00:05:13.629 "rw_ios_per_sec": 0,
00:05:13.629 "rw_mbytes_per_sec": 0,
00:05:13.629 "r_mbytes_per_sec": 0,
00:05:13.629 "w_mbytes_per_sec": 0
00:05:13.629 },
00:05:13.629 "claimed": false,
00:05:13.629 "zoned": false,
00:05:13.629 "supported_io_types": {
00:05:13.629 "read": true,
00:05:13.629 "write": true,
00:05:13.629 "unmap": true,
00:05:13.629 "write_zeroes": true,
00:05:13.629 "flush": true,
00:05:13.629 "reset": true,
00:05:13.629 "compare": false,
00:05:13.629 "compare_and_write": false,
00:05:13.629 "abort": true,
00:05:13.629 "nvme_admin": false,
00:05:13.629 "nvme_io": false
00:05:13.629 },
00:05:13.629 "memory_domains": [
00:05:13.629 {
00:05:13.629 "dma_device_id": "system",
00:05:13.629 "dma_device_type": 1
00:05:13.629 },
00:05:13.629 {
00:05:13.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:13.629 "dma_device_type": 2
00:05:13.629 }
00:05:13.629 ],
00:05:13.629 "driver_specific": {
00:05:13.629 "passthru": {
00:05:13.629 "name": "Passthru0",
00:05:13.629 "base_bdev_name": "Malloc2"
00:05:13.629 }
00:05:13.629 }
00:05:13.629 }
00:05:13.629 ]'
00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:13.891
00:05:13.891 real 0m0.280s
00:05:13.891 user 0m0.186s
00:05:13.891 sys 0m0.030s
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:13.891 00:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.891 ************************************
00:05:13.891 END TEST rpc_daemon_integrity
00:05:13.891 ************************************
00:05:13.891 00:30:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:13.891 00:30:32 rpc -- rpc/rpc.sh@84 -- # killprocess 184650
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@949 -- # '[' -z 184650 ']'
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@953 -- # kill -0 184650
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@954 -- # uname
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 184650
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 184650'
killing process with pid 184650
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@968 -- # kill 184650
00:05:13.891 00:30:32 rpc -- common/autotest_common.sh@973 -- # wait 184650
00:05:14.152
00:05:14.152 real 0m2.448s
00:05:14.152 user 0m3.250s
00:05:14.152 sys 0m0.662s
00:05:14.152 00:30:32 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:14.152 00:30:32 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.152 ************************************
00:05:14.152 END TEST rpc
00:05:14.152 ************************************
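
The teardown above shows the shape of the suite's killprocess helper through its xtrace: check the pid, confirm the process is an SPDK reactor rather than sudo, announce, then kill and reap. A rough bash reconstruction, assuming the simple non-sudo path seen in this log:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # no pid given
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      # the reactor thread is named reactor_0; refuse to kill a sudo wrapper
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }
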
00:05:14.152 00:30:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:14.152 00:30:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:14.152 00:30:32 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:14.152 00:30:32 -- common/autotest_common.sh@10 -- # set +x
00:05:14.152 ************************************
00:05:14.152 START TEST skip_rpc
00:05:14.152 ************************************
00:05:14.152 00:30:32 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:14.413 * Looking for test storage...
00:05:14.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:14.413 00:30:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:14.413 00:30:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:14.413 00:30:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:14.413 00:30:32 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:14.413 00:30:32 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:14.413 00:30:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.413 ************************************
00:05:14.413 START TEST skip_rpc
00:05:14.413 ************************************
00:05:14.413 00:30:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc
00:05:14.413 00:30:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=185493
00:05:14.413 00:30:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:14.413 00:30:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:14.413 00:30:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:14.413 [2024-06-08 00:30:32.544727] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:05:14.413 [2024-06-08 00:30:32.544782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185493 ]
00:05:14.413 EAL: No free 2048 kB hugepages reported on node 1
00:05:14.413 [2024-06-08 00:30:32.610112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.413 [2024-06-08 00:30:32.683542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 185493
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 185493 ']'
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 185493
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 185493
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 185493'
killing process with pid 185493
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 185493
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 185493
00:05:19.699
00:05:19.699 real 0m5.276s
00:05:19.699 user 0m5.076s
00:05:19.699 sys 0m0.240s
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:19.699 00:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.699 ************************************
00:05:19.699 END TEST skip_rpc
00:05:19.699 ************************************
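test_skip_rpc boils down to a negative check: with --no-rpc-server the target must reject any RPC, which is exactly what the NOT wrapper asserts above. A condensed sketch of the same check, with the sleep standing in for the suite's readiness wait:

  # With --no-rpc-server, spdk_get_version must fail; a success is a test failure.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!; sleep 5
  if ./scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC server unexpectedly answering"; exit 1
  fi
  kill "$pid"; wait "$pid"
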
00:05:19.699 00:30:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:19.699 00:30:37 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:19.699 00:30:37 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:19.699 00:30:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.699 ************************************
00:05:19.699 START TEST skip_rpc_with_json
00:05:19.699 ************************************
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=186525
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 186525
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 186525 ']'
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:19.699 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:19.699 [2024-06-08 00:30:37.894300] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:05:19.699 [2024-06-08 00:30:37.894347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186525 ]
00:05:19.959 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.959 [2024-06-08 00:30:37.954095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.959 [2024-06-08 00:30:38.018339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.529 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:20.529 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:20.530 [2024-06-08 00:30:38.665700] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:20.530 request:
00:05:20.530 {
00:05:20.530 "trtype": "tcp",
00:05:20.530 "method": "nvmf_get_transports",
00:05:20.530 "req_id": 1
00:05:20.530 }
00:05:20.530 Got JSON-RPC error response
00:05:20.530 response:
00:05:20.530 {
00:05:20.530 "code": -19,
00:05:20.530 "message": "No such device"
00:05:20.530 }
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:20.530 [2024-06-08 00:30:38.677811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:20.530 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:20.790 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:20.790 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:20.790 {
00:05:20.790 "subsystems": [
00:05:20.790 {
00:05:20.790 "subsystem": "keyring",
00:05:20.790 "config": []
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "subsystem": "iobuf",
00:05:20.790 "config": [
00:05:20.790 {
00:05:20.790 "method": "iobuf_set_options",
00:05:20.790 "params": {
00:05:20.790 "small_pool_count": 8192,
00:05:20.790 "large_pool_count": 1024,
00:05:20.790 "small_bufsize": 8192,
00:05:20.790 "large_bufsize": 135168
00:05:20.790 }
00:05:20.790 }
00:05:20.790 ]
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "subsystem": "sock",
00:05:20.790 "config": [
00:05:20.790 {
00:05:20.790 "method": "sock_set_default_impl",
00:05:20.790 "params": {
00:05:20.790 "impl_name": "posix"
00:05:20.790 }
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "method": "sock_impl_set_options",
00:05:20.790 "params": {
00:05:20.790 "impl_name": "ssl",
00:05:20.790 "recv_buf_size": 4096,
00:05:20.790 "send_buf_size": 4096,
00:05:20.790 "enable_recv_pipe": true,
00:05:20.790 "enable_quickack": false,
00:05:20.790 "enable_placement_id": 0,
00:05:20.790 "enable_zerocopy_send_server": true,
00:05:20.790 "enable_zerocopy_send_client": false,
00:05:20.790 "zerocopy_threshold": 0,
00:05:20.790 "tls_version": 0,
00:05:20.790 "enable_ktls": false
00:05:20.790 }
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "method": "sock_impl_set_options",
00:05:20.790 "params": {
00:05:20.790 "impl_name": "posix",
00:05:20.790 "recv_buf_size": 2097152,
00:05:20.790 "send_buf_size": 2097152,
00:05:20.790 "enable_recv_pipe": true,
00:05:20.790 "enable_quickack": false,
00:05:20.790 "enable_placement_id": 0,
00:05:20.790 "enable_zerocopy_send_server": true,
00:05:20.790 "enable_zerocopy_send_client": false,
00:05:20.790 "zerocopy_threshold": 0,
00:05:20.790 "tls_version": 0,
00:05:20.790 "enable_ktls": false
00:05:20.790 }
00:05:20.790 }
00:05:20.790 ]
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "subsystem": "vmd",
00:05:20.790 "config": []
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "subsystem": "accel",
00:05:20.790 "config": [
00:05:20.790 {
00:05:20.790 "method": "accel_set_options",
00:05:20.790 "params": {
00:05:20.790 "small_cache_size": 128,
00:05:20.790 "large_cache_size": 16,
00:05:20.790 "task_count": 2048,
00:05:20.790 "sequence_count": 2048,
00:05:20.790 "buf_count": 2048
00:05:20.790 }
00:05:20.790 }
00:05:20.790 ]
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "subsystem": "bdev",
00:05:20.790 "config": [
00:05:20.790 {
00:05:20.790 "method": "bdev_set_options",
00:05:20.790 "params": {
00:05:20.790 "bdev_io_pool_size": 65535,
00:05:20.790 "bdev_io_cache_size": 256,
00:05:20.790 "bdev_auto_examine": true,
00:05:20.790 "iobuf_small_cache_size": 128,
00:05:20.790 "iobuf_large_cache_size": 16
00:05:20.790 }
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "method": "bdev_raid_set_options",
00:05:20.790 "params": {
00:05:20.790 "process_window_size_kb": 1024
00:05:20.790 }
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "method": "bdev_iscsi_set_options",
00:05:20.790 "params": {
00:05:20.790 "timeout_sec": 30
00:05:20.790 }
00:05:20.790 },
00:05:20.790 {
00:05:20.790 "method": "bdev_nvme_set_options",
00:05:20.790 "params": {
00:05:20.790 "action_on_timeout": "none",
00:05:20.790 "timeout_us": 0,
00:05:20.790 "timeout_admin_us": 0,
00:05:20.790 "keep_alive_timeout_ms": 10000,
00:05:20.790 "arbitration_burst": 0,
00:05:20.790 "low_priority_weight": 0,
00:05:20.790 "medium_priority_weight": 0,
00:05:20.790 "high_priority_weight": 0,
00:05:20.791 "nvme_adminq_poll_period_us": 10000,
00:05:20.791 "nvme_ioq_poll_period_us": 0,
00:05:20.791 "io_queue_requests": 0,
00:05:20.791 "delay_cmd_submit": true,
00:05:20.791 "transport_retry_count": 4,
00:05:20.791 "bdev_retry_count": 3,
00:05:20.791 "transport_ack_timeout": 0,
00:05:20.791 "ctrlr_loss_timeout_sec": 0,
00:05:20.791 "reconnect_delay_sec": 0,
00:05:20.791 "fast_io_fail_timeout_sec": 0,
00:05:20.791 "disable_auto_failback": false,
00:05:20.791 "generate_uuids": false,
00:05:20.791 "transport_tos": 0,
00:05:20.791 "nvme_error_stat": false,
00:05:20.791 "rdma_srq_size": 0,
00:05:20.791 "io_path_stat": false,
00:05:20.791 "allow_accel_sequence": false,
00:05:20.791 "rdma_max_cq_size": 0,
00:05:20.791 "rdma_cm_event_timeout_ms": 0,
00:05:20.791 "dhchap_digests": [
00:05:20.791 "sha256",
00:05:20.791 "sha384",
00:05:20.791 "sha512"
00:05:20.791 ],
00:05:20.791 "dhchap_dhgroups": [
00:05:20.791 "null",
00:05:20.791 "ffdhe2048",
00:05:20.791 "ffdhe3072",
00:05:20.791 "ffdhe4096",
00:05:20.791 "ffdhe6144",
00:05:20.791 "ffdhe8192"
00:05:20.791 ]
00:05:20.791 }
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "method": "bdev_nvme_set_hotplug",
00:05:20.791 "params": {
00:05:20.791 "period_us": 100000,
00:05:20.791 "enable": false
00:05:20.791 }
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "method": "bdev_wait_for_examine"
00:05:20.791 }
00:05:20.791 ]
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "scsi",
00:05:20.791 "config": null
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "scheduler",
00:05:20.791 "config": [
00:05:20.791 {
00:05:20.791 "method": "framework_set_scheduler",
00:05:20.791 "params": {
00:05:20.791 "name": "static"
00:05:20.791 }
00:05:20.791 }
00:05:20.791 ]
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "vhost_scsi",
00:05:20.791 "config": []
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "vhost_blk",
00:05:20.791 "config": []
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "ublk",
00:05:20.791 "config": []
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "nbd",
00:05:20.791 "config": []
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "nvmf",
00:05:20.791 "config": [
00:05:20.791 {
00:05:20.791 "method": "nvmf_set_config",
00:05:20.791 "params": {
00:05:20.791 "discovery_filter": "match_any",
00:05:20.791 "admin_cmd_passthru": {
00:05:20.791 "identify_ctrlr": false
00:05:20.791 }
00:05:20.791 }
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "method": "nvmf_set_max_subsystems",
00:05:20.791 "params": {
00:05:20.791 "max_subsystems": 1024
00:05:20.791 }
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "method": "nvmf_set_crdt",
00:05:20.791 "params": {
00:05:20.791 "crdt1": 0,
00:05:20.791 "crdt2": 0,
00:05:20.791 "crdt3": 0
00:05:20.791 }
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "method": "nvmf_create_transport",
00:05:20.791 "params": {
00:05:20.791 "trtype": "TCP",
00:05:20.791 "max_queue_depth": 128,
00:05:20.791 "max_io_qpairs_per_ctrlr": 127,
00:05:20.791 "in_capsule_data_size": 4096,
00:05:20.791 "max_io_size": 131072,
00:05:20.791 "io_unit_size": 131072,
00:05:20.791 "max_aq_depth": 128,
00:05:20.791 "num_shared_buffers": 511,
00:05:20.791 "buf_cache_size": 4294967295,
00:05:20.791 "dif_insert_or_strip": false,
00:05:20.791 "zcopy": false,
00:05:20.791 "c2h_success": true,
00:05:20.791 "sock_priority": 0,
00:05:20.791 "abort_timeout_sec": 1,
00:05:20.791 "ack_timeout": 0,
00:05:20.791 "data_wr_pool_size": 0
00:05:20.791 }
00:05:20.791 }
00:05:20.791 ]
00:05:20.791 },
00:05:20.791 {
00:05:20.791 "subsystem": "iscsi",
00:05:20.791 "config": [
00:05:20.791 {
00:05:20.791 "method": "iscsi_set_options",
00:05:20.791 "params": {
00:05:20.791 "node_base": "iqn.2016-06.io.spdk",
00:05:20.791 "max_sessions": 128,
00:05:20.791 "max_connections_per_session": 2,
00:05:20.791 "max_queue_depth": 64,
00:05:20.791 "default_time2wait": 2,
00:05:20.791 "default_time2retain": 20,
00:05:20.791 "first_burst_length": 8192,
00:05:20.791 "immediate_data": true,
00:05:20.791 "allow_duplicated_isid": false,
00:05:20.791 "error_recovery_level": 0,
00:05:20.791 "nop_timeout": 60,
00:05:20.791 "nop_in_interval": 30,
00:05:20.791 "disable_chap": false,
00:05:20.791 "require_chap": false,
00:05:20.791 "mutual_chap": false,
00:05:20.791 "chap_group": 0,
00:05:20.791 "max_large_datain_per_connection": 64,
00:05:20.791 "max_r2t_per_connection": 4,
00:05:20.791 "pdu_pool_size": 36864,
00:05:20.791 "immediate_data_pool_size": 16384,
00:05:20.791 "data_out_pool_size": 2048
00:05:20.791 }
00:05:20.791 }
00:05:20.791 ]
00:05:20.791 }
00:05:20.791 ]
00:05:20.791 }
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 186525
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 186525 ']'
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 186525
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 186525
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 186525'
killing process with pid 186525
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 186525
00:05:20.791 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 186525
00:05:21.051 00:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=186870
00:05:21.051 00:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:21.051 00:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 186870 ']'
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 186870'
killing process with pid 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 186870
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:26.338
00:05:26.338 real 0m6.531s
00:05:26.338 user 0m6.395s
00:05:26.338 sys 0m0.553s
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:26.338 00:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:26.338 ************************************
00:05:26.338 END TEST skip_rpc_with_json
00:05:26.338 ************************************
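
gen_json_config demonstrates the round trip that makes --no-rpc-server usable at all: configure a live target over RPC, snapshot the whole subsystem tree with save_config, then boot a fresh target purely from that JSON and prove the state came back (here, the TCP transport init notice in the second target's log). A condensed sketch of the same round trip, with the readiness sleep standing in for the suite's wait loop:

  # Configure, snapshot, and replay a target configuration from JSON.
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json 2>&1 | tee log.txt &
  sleep 5
  grep -q 'TCP Transport Init' log.txt    # transport was restored without any RPC
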
00:05:26.338 00:30:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:26.338 00:30:44 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:26.338 00:30:44 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:26.338 00:30:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.339 ************************************
00:05:26.339 START TEST skip_rpc_with_delay
00:05:26.339 ************************************
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:26.339 [2024-06-08 00:30:44.512349] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:26.339 [2024-06-08 00:30:44.512443] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:05:26.339
00:05:26.339 real 0m0.079s
00:05:26.339 user 0m0.048s
00:05:26.339 sys 0m0.031s
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:26.339 00:30:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:26.339 ************************************
00:05:26.339 END TEST skip_rpc_with_delay
00:05:26.339 ************************************
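
The combination under test here is intentionally invalid: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which can never happen with --no-rpc-server, so startup must abort with the error logged above. In essence the check is just:

  # Startup must fail fast; success would itself be the test failure.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: expected spdk_app_start to reject the flags"; exit 1
  fi
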
00:05:26.339 00:30:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:26.339 00:30:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:26.339 00:30:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:26.339 00:30:44 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:26.339 00:30:44 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:26.339 00:30:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.339 ************************************
00:05:26.339 START TEST exit_on_failed_rpc_init
00:05:26.339 ************************************
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=187937
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 187937
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 187937 ']'
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:26.339 00:30:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:26.599 [2024-06-08 00:30:44.649594] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:05:26.599 [2024-06-08 00:30:44.649639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187937 ]
00:05:26.599 EAL: No free 2048 kB hugepages reported on node 1
00:05:26.599 [2024-06-08 00:30:44.707483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.599 [2024-06-08 00:30:44.772388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:27.168 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.429 [2024-06-08 00:30:45.462964] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:05:27.429 [2024-06-08 00:30:45.463013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188069 ]
00:05:27.429 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.429 [2024-06-08 00:30:45.519433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.429 [2024-06-08 00:30:45.583840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.429 [2024-06-08 00:30:45.583900] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:27.429 [2024-06-08 00:30:45.583909] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:27.429 [2024-06-08 00:30:45.583915] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 187937
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 187937 ']'
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 187937
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 187937
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 187937'
killing process with pid 187937
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 187937
00:05:27.429 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 187937
00:05:27.690
00:05:27.691 real 0m1.292s
00:05:27.691 user 0m1.510s
00:05:27.691 sys 0m0.334s
00:05:27.691 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:27.691 00:30:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:27.691 ************************************
00:05:27.691 END TEST exit_on_failed_rpc_init
00:05:27.691 ************************************
00:05:27.691 00:30:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:27.691
00:05:27.691 real 0m13.570s
00:05:27.691 user 0m13.175s
00:05:27.691 sys 0m1.428s
00:05:27.691 00:30:45 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:27.691 00:30:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:27.691 ************************************
00:05:27.691 END TEST skip_rpc
00:05:27.691 ************************************
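
exit_on_failed_rpc_init hinges on the RPC listen socket being exclusive: the second spdk_tgt instance (-m 0x2) comes up, fails to bind /var/tmp/spdk.sock because the first instance holds it, and must exit non-zero rather than run half-initialized. A sketch of the same collision, plus the usual way to run two targets side by side (second socket path is an assumption):

  # Second instance must fail on the shared default socket...
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 5
  ./build/bin/spdk_tgt -m 0x2 && echo "FAIL: second instance should have exited"
  # ...but works when given its own RPC socket with -r.
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
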
common/autotest_common.sh@10 -- # set +x 00:05:27.952 ************************************ 00:05:27.952 START TEST rpc_client 00:05:27.952 ************************************ 00:05:27.952 00:30:46 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.952 * Looking for test storage... 00:05:27.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.952 00:30:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.952 OK 00:05:27.952 00:30:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.952 00:05:27.952 real 0m0.121s 00:05:27.952 user 0m0.057s 00:05:27.952 sys 0m0.072s 00:05:27.952 00:30:46 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.952 00:30:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.952 ************************************ 00:05:27.952 END TEST rpc_client 00:05:27.952 ************************************ 00:05:27.952 00:30:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.952 00:30:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:27.952 00:30:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.952 00:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.952 ************************************ 00:05:27.952 START TEST json_config 00:05:27.952 ************************************ 00:05:27.952 00:30:46 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.214 00:30:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:28.214 00:30:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.215 00:30:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.215 00:30:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.215 00:30:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.215 00:30:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.215 00:30:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.215 00:30:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.215 00:30:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.215 00:30:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.215 00:30:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:28.215 INFO: JSON configuration test init 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.215 00:30:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.215 00:30:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.215 00:30:46 json_config -- json_config/common.sh@10 -- # shift 00:05:28.215 00:30:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.215 00:30:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.215 00:30:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.215 00:30:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.215 00:30:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.215 00:30:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=188388 00:05:28.215 00:30:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.215 Waiting for target to run... 
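The target is launched on a private RPC socket with --wait-for-rpc, and waitforlisten then blocks until that socket answers. A minimal sketch of the start-and-wait handshake, assuming rpc.py from the SPDK tree; the real waitforlisten in autotest_common.sh carries more retry and error handling than shown here.

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
for (( i = 0; i < 100; i++ )); do
    # rpc_get_methods succeeds once the RPC server is accepting requests
    if scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done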
00:05:28.215 00:30:46 json_config -- json_config/common.sh@25 -- # waitforlisten 188388 /var/tmp/spdk_tgt.sock 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@830 -- # '[' -z 188388 ']' 00:05:28.215 00:30:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.215 00:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.215 [2024-06-08 00:30:46.374888] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:28.215 [2024-06-08 00:30:46.374954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188388 ] 00:05:28.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.476 [2024-06-08 00:30:46.680422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.476 [2024-06-08 00:30:46.739219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:29.047 00:30:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.047 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:29.047 00:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.047 00:30:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:29.047 00:30:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.618 00:30:47 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:29.619 00:30:47 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:29.619 00:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 00:30:47 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:29.619 00:30:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:29.619 00:30:47 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:29.619 00:30:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:29.619 00:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:29.879 00:30:47 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:29.879 00:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:29.879 00:30:47 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.879 00:30:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.879 MallocForNvmf0 00:05:29.879 00:30:48 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.879 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.141 MallocForNvmf1 00:05:30.141 00:30:48 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.141 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.141 [2024-06-08 00:30:48.383422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.141 00:30:48 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.141 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.402 00:30:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.402 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.663 00:30:48 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.663 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.663 00:30:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.663 00:30:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.924 [2024-06-08 00:30:49.029502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.924 00:30:49 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:30.924 00:30:49 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:30.924 00:30:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.924 00:30:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:30.924 00:30:49 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:30.924 00:30:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.924 00:30:49 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:30.924 00:30:49 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.924 00:30:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.184 MallocBdevForConfigChangeCheck 00:05:31.184 00:30:49 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:31.184 00:30:49 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:31.184 00:30:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.184 00:30:49 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:31.184 00:30:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.444 00:30:49 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:31.444 INFO: shutting down applications... 
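Condensed, the NVMe-oF TCP configuration the test assembled over RPC before shutting down (plus the save_config snapshot it relaunches from) is the following sequence. Every command is taken from the trace above; only the RPC shorthand variable and the explicit redirect into the config file are introduced here.

RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MiB bdev, 1 KiB blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > spdk_tgt_config.json    # snapshot replayed on relaunch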
00:05:31.444 00:30:49 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:31.444 00:30:49 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:31.444 00:30:49 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:31.444 00:30:49 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:32.015 Calling clear_iscsi_subsystem 00:05:32.015 Calling clear_nvmf_subsystem 00:05:32.015 Calling clear_nbd_subsystem 00:05:32.015 Calling clear_ublk_subsystem 00:05:32.015 Calling clear_vhost_blk_subsystem 00:05:32.015 Calling clear_vhost_scsi_subsystem 00:05:32.015 Calling clear_bdev_subsystem 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:32.015 00:30:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:32.276 00:30:50 json_config -- json_config/json_config.sh@345 -- # break 00:05:32.276 00:30:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:32.276 00:30:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:32.276 00:30:50 json_config -- json_config/common.sh@31 -- # local app=target 00:05:32.276 00:30:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.276 00:30:50 json_config -- json_config/common.sh@35 -- # [[ -n 188388 ]] 00:05:32.276 00:30:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 188388 00:05:32.276 00:30:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.276 00:30:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.276 00:30:50 json_config -- json_config/common.sh@41 -- # kill -0 188388 00:05:32.276 00:30:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.884 00:30:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.884 00:30:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.884 00:30:50 json_config -- json_config/common.sh@41 -- # kill -0 188388 00:05:32.884 00:30:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.884 00:30:50 json_config -- json_config/common.sh@43 -- # break 00:05:32.885 00:30:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.885 00:30:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.885 SPDK target shutdown done 00:05:32.885 00:30:50 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:32.885 INFO: relaunching applications... 
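The shutdown traced above follows the pattern in json_config/common.sh: send SIGINT, then poll with kill -0 until the process is gone, allowing up to 30 half-second ticks. The same loop, reduced to its essentials:

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done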
00:05:32.885 00:30:50 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.885 00:30:50 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.885 00:30:50 json_config -- json_config/common.sh@10 -- # shift 00:05:32.885 00:30:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.885 00:30:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.885 00:30:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.885 00:30:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.885 00:30:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.885 00:30:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=189515 00:05:32.885 00:30:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.885 Waiting for target to run... 00:05:32.885 00:30:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.885 00:30:50 json_config -- json_config/common.sh@25 -- # waitforlisten 189515 /var/tmp/spdk_tgt.sock 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@830 -- # '[' -z 189515 ']' 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.885 00:30:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.885 [2024-06-08 00:30:50.860783] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:32.885 [2024-06-08 00:30:50.860838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189515 ] 00:05:32.885 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.885 [2024-06-08 00:30:51.062939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.885 [2024-06-08 00:30:51.112242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.455 [2024-06-08 00:30:51.601447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.455 [2024-06-08 00:30:51.633790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.455 00:30:51 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.455 00:30:51 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:33.455 00:30:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.455 00:05:33.455 00:30:51 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:33.455 00:30:51 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.455 INFO: Checking if target configuration is the same... 
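"Checking if target configuration is the same" is a normalize-and-diff: the live config is pulled from the relaunched target with save_config, both documents are canonicalized by config_filter.py -method sort, and a plain diff decides. The sketch below is roughly what json_diff.sh does in the trace that follows, assuming config_filter.py reads its input on stdin as it is invoked there.

live_cfg=$(mktemp)   # config of the relaunched target
file_cfg=$(mktemp)   # config file the target was started from
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live_cfg"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$file_cfg"
diff -u "$live_cfg" "$file_cfg" && echo 'INFO: JSON config files are the same'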
00:05:33.455 00:30:51 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.456 00:30:51 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:33.456 00:30:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.456 + '[' 2 -ne 2 ']' 00:05:33.456 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.456 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:33.456 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.456 +++ basename /dev/fd/62 00:05:33.456 ++ mktemp /tmp/62.XXX 00:05:33.456 + tmp_file_1=/tmp/62.90i 00:05:33.456 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.456 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.456 + tmp_file_2=/tmp/spdk_tgt_config.json.a0Y 00:05:33.456 + ret=0 00:05:33.456 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.715 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.975 + diff -u /tmp/62.90i /tmp/spdk_tgt_config.json.a0Y 00:05:33.975 + echo 'INFO: JSON config files are the same' 00:05:33.975 INFO: JSON config files are the same 00:05:33.975 + rm /tmp/62.90i /tmp/spdk_tgt_config.json.a0Y 00:05:33.975 + exit 0 00:05:33.975 00:30:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:33.975 00:30:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.975 INFO: changing configuration and checking if this can be detected... 00:05:33.975 00:30:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.975 00:30:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.975 00:30:52 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.975 00:30:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:33.975 00:30:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.975 + '[' 2 -ne 2 ']' 00:05:33.975 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.975 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:33.975 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.975 +++ basename /dev/fd/62 00:05:33.975 ++ mktemp /tmp/62.XXX 00:05:33.975 + tmp_file_1=/tmp/62.urX 00:05:33.975 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.975 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.975 + tmp_file_2=/tmp/spdk_tgt_config.json.W9G 00:05:33.975 + ret=0 00:05:33.975 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.235 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.235 + diff -u /tmp/62.urX /tmp/spdk_tgt_config.json.W9G 00:05:34.235 + ret=1 00:05:34.235 + echo '=== Start of file: /tmp/62.urX ===' 00:05:34.235 + cat /tmp/62.urX 00:05:34.235 + echo '=== End of file: /tmp/62.urX ===' 00:05:34.235 + echo '' 00:05:34.235 + echo '=== Start of file: /tmp/spdk_tgt_config.json.W9G ===' 00:05:34.235 + cat /tmp/spdk_tgt_config.json.W9G 00:05:34.235 + echo '=== End of file: /tmp/spdk_tgt_config.json.W9G ===' 00:05:34.235 + echo '' 00:05:34.235 + rm /tmp/62.urX /tmp/spdk_tgt_config.json.W9G 00:05:34.235 + exit 1 00:05:34.235 00:30:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:34.235 INFO: configuration change detected. 00:05:34.235 00:30:52 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:34.235 00:30:52 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:34.235 00:30:52 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:34.235 00:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@317 -- # [[ -n 189515 ]] 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.495 00:30:52 json_config -- json_config/json_config.sh@323 -- # killprocess 189515 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@949 -- # '[' -z 189515 ']' 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@953 -- # kill -0 189515 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@954 -- # uname 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:34.495 00:30:52 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 189515 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 189515' 00:05:34.495 killing process with pid 189515 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@968 -- # kill 189515 00:05:34.495 00:30:52 json_config -- common/autotest_common.sh@973 -- # wait 189515 00:05:34.755 00:30:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.755 00:30:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:34.755 00:30:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:34.755 00:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.755 00:30:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:34.755 00:30:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:34.755 INFO: Success 00:05:34.755 00:05:34.755 real 0m6.745s 00:05:34.755 user 0m8.263s 00:05:34.755 sys 0m1.611s 00:05:34.755 00:30:52 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.755 00:30:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.755 ************************************ 00:05:34.755 END TEST json_config 00:05:34.755 ************************************ 00:05:34.755 00:30:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.755 00:30:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:34.755 00:30:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:34.755 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:34.755 ************************************ 00:05:34.755 START TEST json_config_extra_key 00:05:34.755 ************************************ 00:05:34.755 00:30:53 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:35.015 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.015 00:30:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.015 00:30:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.015 00:30:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.015 00:30:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.016 00:30:53 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.016 00:30:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.016 00:30:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.016 00:30:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.016 00:30:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.016 00:30:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.016 00:30:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.016 00:30:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.016 00:30:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.016 00:30:53 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.016 00:30:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.016 INFO: launching applications... 00:05:35.016 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=189973 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.016 Waiting for target to run... 
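The --json file handed to spdk_tgt here (extra_key.json, like the spdk_tgt_config.json used earlier) is a saved RPC replay: a top-level "subsystems" array whose entries list the method calls to re-issue at startup. The skeleton below is illustrative only, not the contents of extra_key.json; the bdev_malloc_create parameter names match the save_config output format.

cat > /tmp/example_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForNvmf0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF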
00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 189973 /var/tmp/spdk_tgt.sock 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 189973 ']' 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:35.016 00:30:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:35.016 00:30:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 [2024-06-08 00:30:53.172666] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:35.016 [2024-06-08 00:30:53.172716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189973 ] 00:05:35.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.276 [2024-06-08 00:30:53.485698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.276 [2024-06-08 00:30:53.537065] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.846 00:30:53 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.847 00:30:53 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.847 00:05:35.847 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:35.847 INFO: shutting down applications... 
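Before touching the target, both json_config tests install an ERR trap ('on_error_exit "${FUNCNAME}" "${LINENO}"'), so any failing command reports where it died and the target is still torn down. A minimal sketch of that pattern; the body of on_error_exit is not shown in this trace, so the version here is a simplified stand-in.

on_error_exit() {
    # Hypothetical reduced body: report the failing function and line, kill the target.
    echo "ERROR: ${1}() failed at line ${2}" >&2
    [[ -n "${app_pid:-}" ]] && kill -SIGTERM "$app_pid" 2>/dev/null
    exit 1
}
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR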
00:05:35.847 00:30:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 189973 ]] 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 189973 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 189973 00:05:35.847 00:30:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 189973 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.418 00:30:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.418 SPDK target shutdown done 00:05:36.418 00:30:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:36.418 Success 00:05:36.418 00:05:36.418 real 0m1.438s 00:05:36.418 user 0m1.062s 00:05:36.418 sys 0m0.403s 00:05:36.418 00:30:54 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.418 00:30:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 ************************************ 00:05:36.418 END TEST json_config_extra_key 00:05:36.418 ************************************ 00:05:36.418 00:30:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.418 00:30:54 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.418 00:30:54 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.418 00:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 ************************************ 00:05:36.418 START TEST alias_rpc 00:05:36.418 ************************************ 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.418 * Looking for test storage... 
00:05:36.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:36.418 00:30:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.418 00:30:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=190361 00:05:36.418 00:30:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 190361 00:05:36.418 00:30:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 190361 ']' 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.418 00:30:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 [2024-06-08 00:30:54.666787] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:36.419 [2024-06-08 00:30:54.666847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190361 ] 00:05:36.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.679 [2024-06-08 00:30:54.732289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.679 [2024-06-08 00:30:54.805526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.252 00:30:55 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.252 00:30:55 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:37.252 00:30:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:37.513 00:30:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 190361 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 190361 ']' 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 190361 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 190361 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 190361' 00:05:37.513 killing process with pid 190361 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@968 -- # kill 190361 00:05:37.513 00:30:55 alias_rpc -- common/autotest_common.sh@973 -- # wait 190361 00:05:37.775 00:05:37.775 real 0m1.360s 00:05:37.775 user 0m1.498s 00:05:37.775 sys 0m0.380s 00:05:37.775 00:30:55 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:37.775 00:30:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.775 ************************************ 
00:05:37.775 END TEST alias_rpc 00:05:37.775 ************************************ 00:05:37.775 00:30:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:37.775 00:30:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.775 00:30:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:37.775 00:30:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:37.775 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.775 ************************************ 00:05:37.775 START TEST spdkcli_tcp 00:05:37.775 ************************************ 00:05:37.775 00:30:55 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.775 * Looking for test storage... 00:05:37.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:37.775 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=190744 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 190744 00:05:38.035 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 190744 ']' 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:38.035 00:30:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.035 [2024-06-08 00:30:56.118704] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
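spdkcli_tcp drives the JSON-RPC server over TCP instead of the UNIX socket by putting socat in the middle, as the trace below shows. Condensed, with all three commands taken from the log (-r is the retry count and -t the per-request timeout for rpc.py):

./build/bin/spdk_tgt -m 0x3 -p 0 &
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods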
00:05:38.035 [2024-06-08 00:30:56.118766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190744 ] 00:05:38.035 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.035 [2024-06-08 00:30:56.181505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.035 [2024-06-08 00:30:56.254387] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.035 [2024-06-08 00:30:56.254389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.977 00:30:56 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:38.977 00:30:56 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:38.977 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=190770 00:05:38.977 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.977 00:30:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.977 [ 00:05:38.977 "bdev_malloc_delete", 00:05:38.977 "bdev_malloc_create", 00:05:38.977 "bdev_null_resize", 00:05:38.977 "bdev_null_delete", 00:05:38.977 "bdev_null_create", 00:05:38.977 "bdev_nvme_cuse_unregister", 00:05:38.977 "bdev_nvme_cuse_register", 00:05:38.977 "bdev_opal_new_user", 00:05:38.977 "bdev_opal_set_lock_state", 00:05:38.977 "bdev_opal_delete", 00:05:38.977 "bdev_opal_get_info", 00:05:38.977 "bdev_opal_create", 00:05:38.977 "bdev_nvme_opal_revert", 00:05:38.977 "bdev_nvme_opal_init", 00:05:38.977 "bdev_nvme_send_cmd", 00:05:38.977 "bdev_nvme_get_path_iostat", 00:05:38.977 "bdev_nvme_get_mdns_discovery_info", 00:05:38.977 "bdev_nvme_stop_mdns_discovery", 00:05:38.977 "bdev_nvme_start_mdns_discovery", 00:05:38.977 "bdev_nvme_set_multipath_policy", 00:05:38.977 "bdev_nvme_set_preferred_path", 00:05:38.977 "bdev_nvme_get_io_paths", 00:05:38.977 "bdev_nvme_remove_error_injection", 00:05:38.977 "bdev_nvme_add_error_injection", 00:05:38.977 "bdev_nvme_get_discovery_info", 00:05:38.977 "bdev_nvme_stop_discovery", 00:05:38.977 "bdev_nvme_start_discovery", 00:05:38.977 "bdev_nvme_get_controller_health_info", 00:05:38.977 "bdev_nvme_disable_controller", 00:05:38.977 "bdev_nvme_enable_controller", 00:05:38.977 "bdev_nvme_reset_controller", 00:05:38.977 "bdev_nvme_get_transport_statistics", 00:05:38.977 "bdev_nvme_apply_firmware", 00:05:38.977 "bdev_nvme_detach_controller", 00:05:38.977 "bdev_nvme_get_controllers", 00:05:38.977 "bdev_nvme_attach_controller", 00:05:38.977 "bdev_nvme_set_hotplug", 00:05:38.977 "bdev_nvme_set_options", 00:05:38.977 "bdev_passthru_delete", 00:05:38.977 "bdev_passthru_create", 00:05:38.977 "bdev_lvol_set_parent_bdev", 00:05:38.977 "bdev_lvol_set_parent", 00:05:38.977 "bdev_lvol_check_shallow_copy", 00:05:38.977 "bdev_lvol_start_shallow_copy", 00:05:38.977 "bdev_lvol_grow_lvstore", 00:05:38.977 "bdev_lvol_get_lvols", 00:05:38.977 "bdev_lvol_get_lvstores", 00:05:38.977 "bdev_lvol_delete", 00:05:38.977 "bdev_lvol_set_read_only", 00:05:38.977 "bdev_lvol_resize", 00:05:38.977 "bdev_lvol_decouple_parent", 00:05:38.977 "bdev_lvol_inflate", 00:05:38.977 "bdev_lvol_rename", 00:05:38.977 "bdev_lvol_clone_bdev", 00:05:38.977 "bdev_lvol_clone", 00:05:38.977 "bdev_lvol_snapshot", 00:05:38.977 "bdev_lvol_create", 00:05:38.977 "bdev_lvol_delete_lvstore", 00:05:38.977 "bdev_lvol_rename_lvstore", 
00:05:38.977 "bdev_lvol_create_lvstore", 00:05:38.977 "bdev_raid_set_options", 00:05:38.977 "bdev_raid_remove_base_bdev", 00:05:38.977 "bdev_raid_add_base_bdev", 00:05:38.977 "bdev_raid_delete", 00:05:38.977 "bdev_raid_create", 00:05:38.977 "bdev_raid_get_bdevs", 00:05:38.977 "bdev_error_inject_error", 00:05:38.977 "bdev_error_delete", 00:05:38.977 "bdev_error_create", 00:05:38.977 "bdev_split_delete", 00:05:38.977 "bdev_split_create", 00:05:38.977 "bdev_delay_delete", 00:05:38.977 "bdev_delay_create", 00:05:38.977 "bdev_delay_update_latency", 00:05:38.977 "bdev_zone_block_delete", 00:05:38.977 "bdev_zone_block_create", 00:05:38.977 "blobfs_create", 00:05:38.977 "blobfs_detect", 00:05:38.977 "blobfs_set_cache_size", 00:05:38.977 "bdev_aio_delete", 00:05:38.977 "bdev_aio_rescan", 00:05:38.977 "bdev_aio_create", 00:05:38.977 "bdev_ftl_set_property", 00:05:38.977 "bdev_ftl_get_properties", 00:05:38.977 "bdev_ftl_get_stats", 00:05:38.977 "bdev_ftl_unmap", 00:05:38.977 "bdev_ftl_unload", 00:05:38.977 "bdev_ftl_delete", 00:05:38.977 "bdev_ftl_load", 00:05:38.977 "bdev_ftl_create", 00:05:38.977 "bdev_virtio_attach_controller", 00:05:38.977 "bdev_virtio_scsi_get_devices", 00:05:38.977 "bdev_virtio_detach_controller", 00:05:38.977 "bdev_virtio_blk_set_hotplug", 00:05:38.977 "bdev_iscsi_delete", 00:05:38.977 "bdev_iscsi_create", 00:05:38.977 "bdev_iscsi_set_options", 00:05:38.977 "accel_error_inject_error", 00:05:38.977 "ioat_scan_accel_module", 00:05:38.977 "dsa_scan_accel_module", 00:05:38.977 "iaa_scan_accel_module", 00:05:38.977 "keyring_file_remove_key", 00:05:38.977 "keyring_file_add_key", 00:05:38.977 "keyring_linux_set_options", 00:05:38.977 "iscsi_get_histogram", 00:05:38.977 "iscsi_enable_histogram", 00:05:38.977 "iscsi_set_options", 00:05:38.977 "iscsi_get_auth_groups", 00:05:38.977 "iscsi_auth_group_remove_secret", 00:05:38.977 "iscsi_auth_group_add_secret", 00:05:38.977 "iscsi_delete_auth_group", 00:05:38.977 "iscsi_create_auth_group", 00:05:38.977 "iscsi_set_discovery_auth", 00:05:38.977 "iscsi_get_options", 00:05:38.977 "iscsi_target_node_request_logout", 00:05:38.977 "iscsi_target_node_set_redirect", 00:05:38.977 "iscsi_target_node_set_auth", 00:05:38.977 "iscsi_target_node_add_lun", 00:05:38.977 "iscsi_get_stats", 00:05:38.977 "iscsi_get_connections", 00:05:38.977 "iscsi_portal_group_set_auth", 00:05:38.977 "iscsi_start_portal_group", 00:05:38.977 "iscsi_delete_portal_group", 00:05:38.977 "iscsi_create_portal_group", 00:05:38.977 "iscsi_get_portal_groups", 00:05:38.977 "iscsi_delete_target_node", 00:05:38.977 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.977 "iscsi_target_node_add_pg_ig_maps", 00:05:38.977 "iscsi_create_target_node", 00:05:38.977 "iscsi_get_target_nodes", 00:05:38.977 "iscsi_delete_initiator_group", 00:05:38.977 "iscsi_initiator_group_remove_initiators", 00:05:38.977 "iscsi_initiator_group_add_initiators", 00:05:38.977 "iscsi_create_initiator_group", 00:05:38.977 "iscsi_get_initiator_groups", 00:05:38.977 "nvmf_set_crdt", 00:05:38.977 "nvmf_set_config", 00:05:38.977 "nvmf_set_max_subsystems", 00:05:38.977 "nvmf_stop_mdns_prr", 00:05:38.977 "nvmf_publish_mdns_prr", 00:05:38.977 "nvmf_subsystem_get_listeners", 00:05:38.977 "nvmf_subsystem_get_qpairs", 00:05:38.977 "nvmf_subsystem_get_controllers", 00:05:38.977 "nvmf_get_stats", 00:05:38.977 "nvmf_get_transports", 00:05:38.977 "nvmf_create_transport", 00:05:38.977 "nvmf_get_targets", 00:05:38.977 "nvmf_delete_target", 00:05:38.977 "nvmf_create_target", 00:05:38.977 "nvmf_subsystem_allow_any_host", 00:05:38.977 
"nvmf_subsystem_remove_host", 00:05:38.977 "nvmf_subsystem_add_host", 00:05:38.977 "nvmf_ns_remove_host", 00:05:38.977 "nvmf_ns_add_host", 00:05:38.977 "nvmf_subsystem_remove_ns", 00:05:38.977 "nvmf_subsystem_add_ns", 00:05:38.977 "nvmf_subsystem_listener_set_ana_state", 00:05:38.977 "nvmf_discovery_get_referrals", 00:05:38.977 "nvmf_discovery_remove_referral", 00:05:38.977 "nvmf_discovery_add_referral", 00:05:38.977 "nvmf_subsystem_remove_listener", 00:05:38.977 "nvmf_subsystem_add_listener", 00:05:38.977 "nvmf_delete_subsystem", 00:05:38.977 "nvmf_create_subsystem", 00:05:38.977 "nvmf_get_subsystems", 00:05:38.977 "env_dpdk_get_mem_stats", 00:05:38.977 "nbd_get_disks", 00:05:38.977 "nbd_stop_disk", 00:05:38.977 "nbd_start_disk", 00:05:38.977 "ublk_recover_disk", 00:05:38.977 "ublk_get_disks", 00:05:38.977 "ublk_stop_disk", 00:05:38.977 "ublk_start_disk", 00:05:38.977 "ublk_destroy_target", 00:05:38.977 "ublk_create_target", 00:05:38.977 "virtio_blk_create_transport", 00:05:38.977 "virtio_blk_get_transports", 00:05:38.977 "vhost_controller_set_coalescing", 00:05:38.977 "vhost_get_controllers", 00:05:38.977 "vhost_delete_controller", 00:05:38.977 "vhost_create_blk_controller", 00:05:38.977 "vhost_scsi_controller_remove_target", 00:05:38.977 "vhost_scsi_controller_add_target", 00:05:38.977 "vhost_start_scsi_controller", 00:05:38.977 "vhost_create_scsi_controller", 00:05:38.977 "thread_set_cpumask", 00:05:38.977 "framework_get_scheduler", 00:05:38.977 "framework_set_scheduler", 00:05:38.977 "framework_get_reactors", 00:05:38.977 "thread_get_io_channels", 00:05:38.977 "thread_get_pollers", 00:05:38.977 "thread_get_stats", 00:05:38.977 "framework_monitor_context_switch", 00:05:38.977 "spdk_kill_instance", 00:05:38.977 "log_enable_timestamps", 00:05:38.977 "log_get_flags", 00:05:38.977 "log_clear_flag", 00:05:38.977 "log_set_flag", 00:05:38.977 "log_get_level", 00:05:38.977 "log_set_level", 00:05:38.977 "log_get_print_level", 00:05:38.977 "log_set_print_level", 00:05:38.977 "framework_enable_cpumask_locks", 00:05:38.977 "framework_disable_cpumask_locks", 00:05:38.977 "framework_wait_init", 00:05:38.977 "framework_start_init", 00:05:38.977 "scsi_get_devices", 00:05:38.977 "bdev_get_histogram", 00:05:38.977 "bdev_enable_histogram", 00:05:38.978 "bdev_set_qos_limit", 00:05:38.978 "bdev_set_qd_sampling_period", 00:05:38.978 "bdev_get_bdevs", 00:05:38.978 "bdev_reset_iostat", 00:05:38.978 "bdev_get_iostat", 00:05:38.978 "bdev_examine", 00:05:38.978 "bdev_wait_for_examine", 00:05:38.978 "bdev_set_options", 00:05:38.978 "notify_get_notifications", 00:05:38.978 "notify_get_types", 00:05:38.978 "accel_get_stats", 00:05:38.978 "accel_set_options", 00:05:38.978 "accel_set_driver", 00:05:38.978 "accel_crypto_key_destroy", 00:05:38.978 "accel_crypto_keys_get", 00:05:38.978 "accel_crypto_key_create", 00:05:38.978 "accel_assign_opc", 00:05:38.978 "accel_get_module_info", 00:05:38.978 "accel_get_opc_assignments", 00:05:38.978 "vmd_rescan", 00:05:38.978 "vmd_remove_device", 00:05:38.978 "vmd_enable", 00:05:38.978 "sock_get_default_impl", 00:05:38.978 "sock_set_default_impl", 00:05:38.978 "sock_impl_set_options", 00:05:38.978 "sock_impl_get_options", 00:05:38.978 "iobuf_get_stats", 00:05:38.978 "iobuf_set_options", 00:05:38.978 "framework_get_pci_devices", 00:05:38.978 "framework_get_config", 00:05:38.978 "framework_get_subsystems", 00:05:38.978 "trace_get_info", 00:05:38.978 "trace_get_tpoint_group_mask", 00:05:38.978 "trace_disable_tpoint_group", 00:05:38.978 "trace_enable_tpoint_group", 00:05:38.978 
"trace_clear_tpoint_mask", 00:05:38.978 "trace_set_tpoint_mask", 00:05:38.978 "keyring_get_keys", 00:05:38.978 "spdk_get_version", 00:05:38.978 "rpc_get_methods" 00:05:38.978 ] 00:05:38.978 00:30:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.978 00:30:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.978 00:30:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 190744 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 190744 ']' 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 190744 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 190744 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 190744' 00:05:38.978 killing process with pid 190744 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 190744 00:05:38.978 00:30:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 190744 00:05:39.238 00:05:39.238 real 0m1.414s 00:05:39.238 user 0m2.624s 00:05:39.238 sys 0m0.420s 00:05:39.238 00:30:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.238 00:30:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.238 ************************************ 00:05:39.238 END TEST spdkcli_tcp 00:05:39.238 ************************************ 00:05:39.238 00:30:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.238 00:30:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:39.238 00:30:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.238 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:05:39.238 ************************************ 00:05:39.238 START TEST dpdk_mem_utility 00:05:39.238 ************************************ 00:05:39.238 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.498 * Looking for test storage... 
00:05:39.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:39.498 00:30:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.498 00:30:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=191148 00:05:39.498 00:30:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 191148 00:05:39.498 00:30:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 191148 ']' 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:39.498 00:30:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.498 [2024-06-08 00:30:57.609786] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:39.498 [2024-06-08 00:30:57.609870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191148 ] 00:05:39.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.498 [2024-06-08 00:30:57.674302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.498 [2024-06-08 00:30:57.749892] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.439 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:40.439 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:40.439 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.439 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.439 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:40.439 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.439 { 00:05:40.439 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.439 } 00:05:40.439 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:40.439 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.439 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:40.439 1 heaps totaling size 814.000000 MiB 00:05:40.439 size: 814.000000 MiB heap id: 0 00:05:40.439 end heaps---------- 00:05:40.439 8 mempools totaling size 598.116089 MiB 00:05:40.439 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.439 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.439 size: 84.521057 MiB name: bdev_io_191148 00:05:40.439 size: 51.011292 MiB name: evtpool_191148 00:05:40.439 size: 50.003479 MiB name: 
msgpool_191148 00:05:40.439 size: 21.763794 MiB name: PDU_Pool 00:05:40.439 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.439 size: 0.026123 MiB name: Session_Pool 00:05:40.439 end mempools------- 00:05:40.439 6 memzones totaling size 4.142822 MiB 00:05:40.439 size: 1.000366 MiB name: RG_ring_0_191148 00:05:40.439 size: 1.000366 MiB name: RG_ring_1_191148 00:05:40.439 size: 1.000366 MiB name: RG_ring_4_191148 00:05:40.439 size: 1.000366 MiB name: RG_ring_5_191148 00:05:40.439 size: 0.125366 MiB name: RG_ring_2_191148 00:05:40.439 size: 0.015991 MiB name: RG_ring_3_191148 00:05:40.439 end memzones------- 00:05:40.439 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.439 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:40.439 list of free elements. size: 12.519348 MiB 00:05:40.439 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.439 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:40.439 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:40.439 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:40.439 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:40.439 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:40.439 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:40.439 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:40.439 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:40.439 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:40.439 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:40.439 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:40.439 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:40.439 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:40.440 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:40.440 list of standard malloc elements. 
size: 199.218079 MiB 00:05:40.440 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:40.440 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:40.440 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:40.440 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:40.440 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.440 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.440 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:40.440 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.440 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:40.440 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:40.440 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:40.440 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:40.440 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:40.440 list of memzone associated elements. 
size: 602.262573 MiB 00:05:40.440 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:40.440 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.440 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:40.440 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.440 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:40.440 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_191148_0 00:05:40.440 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.440 associated memzone info: size: 48.002930 MiB name: MP_evtpool_191148_0 00:05:40.440 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.440 associated memzone info: size: 48.002930 MiB name: MP_msgpool_191148_0 00:05:40.440 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:40.440 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.440 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:40.440 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.440 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.440 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_191148 00:05:40.440 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.440 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_191148 00:05:40.440 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.440 associated memzone info: size: 1.007996 MiB name: MP_evtpool_191148 00:05:40.440 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:40.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.440 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:40.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.440 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:40.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.440 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:40.440 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.440 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.440 associated memzone info: size: 1.000366 MiB name: RG_ring_0_191148 00:05:40.440 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_191148 00:05:40.440 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:40.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_191148 00:05:40.440 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:40.440 associated memzone info: size: 1.000366 MiB name: RG_ring_5_191148 00:05:40.440 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:40.440 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_191148 00:05:40.440 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:40.440 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.440 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:40.440 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.440 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:40.440 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.440 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:40.440 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_191148 00:05:40.440 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:40.440 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.440 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:40.440 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.440 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:40.440 associated memzone info: size: 0.015991 MiB name: RG_ring_3_191148 00:05:40.440 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:40.440 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.440 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:40.440 associated memzone info: size: 0.000183 MiB name: MP_msgpool_191148 00:05:40.440 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:40.440 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_191148 00:05:40.440 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:40.440 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.440 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.440 00:30:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 191148 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 191148 ']' 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 191148 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 191148 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 191148' 00:05:40.440 killing process with pid 191148 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 191148 00:05:40.440 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 191148 00:05:40.701 00:05:40.701 real 0m1.296s 00:05:40.701 user 0m1.395s 00:05:40.701 sys 0m0.359s 00:05:40.701 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.701 00:30:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.701 ************************************ 00:05:40.701 END TEST dpdk_mem_utility 00:05:40.701 ************************************ 00:05:40.701 00:30:58 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:40.701 00:30:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:40.701 00:30:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.701 00:30:58 -- common/autotest_common.sh@10 -- # set +x 00:05:40.701 ************************************ 00:05:40.701 START TEST event 00:05:40.701 ************************************ 00:05:40.701 00:30:58 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:40.701 * Looking for test storage... 
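The dpdk_mem_utility test above inspects DPDK memory in two steps: the env_dpdk_get_mem_stats RPC makes the target write a dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py parses that dump into the heap/mempool/memzone summary, while -m 0 prints the element-level layout of heap 0 shown above. The same flow by hand, as a sketch assuming a running spdk_tgt and an SPDK tree at ./spdk:

# ask the target to dump its DPDK memory state; the reply names the dump file
./spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> { "filename": "/tmp/spdk_mem_dump.txt" }

# summarize heaps, mempools and memzones from the dump
./spdk/scripts/dpdk_mem_info.py

# element-by-element breakdown of heap 0, as in the log above
./spdk/scripts/dpdk_mem_info.py -m 0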
00:05:40.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:40.701 00:30:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:40.701 00:30:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.701 00:30:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.701 00:30:58 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:40.701 00:30:58 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.701 00:30:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.701 ************************************ 00:05:40.701 START TEST event_perf 00:05:40.701 ************************************ 00:05:40.701 00:30:58 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.701 Running I/O for 1 seconds...[2024-06-08 00:30:58.969639] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:40.701 [2024-06-08 00:30:58.969731] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191398 ] 00:05:40.961 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.961 [2024-06-08 00:30:59.035071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.961 [2024-06-08 00:30:59.103182] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.961 [2024-06-08 00:30:59.103199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.961 [2024-06-08 00:30:59.103333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.961 [2024-06-08 00:30:59.103334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.902 Running I/O for 1 seconds... 00:05:41.902 lcore 0: 176386 00:05:41.902 lcore 1: 176386 00:05:41.902 lcore 2: 176383 00:05:41.903 lcore 3: 176386 00:05:41.903 done. 00:05:41.903 00:05:41.903 real 0m1.210s 00:05:41.903 user 0m4.131s 00:05:41.903 sys 0m0.076s 00:05:41.903 00:31:00 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:41.903 00:31:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.903 ************************************ 00:05:41.903 END TEST event_perf 00:05:41.903 ************************************ 00:05:42.164 00:31:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.164 00:31:00 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:42.164 00:31:00 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.164 00:31:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.164 ************************************ 00:05:42.164 START TEST event_reactor 00:05:42.164 ************************************ 00:05:42.164 00:31:00 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.164 [2024-06-08 00:31:00.255810] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
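The event_perf run above is a throughput microbenchmark: it starts one reactor per core in the mask, keeps them passing events for the requested time, and prints a per-lcore count when done (roughly 176k events on each of cores 0-3 in this one-second run). Its invocation, as a sketch assuming the test binaries were built under ./spdk:

# four reactors (core mask 0xF), one second of event passing
./spdk/test/event/event_perf/event_perf -m 0xF -t 1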
00:05:42.164 [2024-06-08 00:31:00.255908] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191589 ] 00:05:42.164 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.164 [2024-06-08 00:31:00.330490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.164 [2024-06-08 00:31:00.397531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.547 test_start 00:05:43.548 oneshot 00:05:43.548 tick 100 00:05:43.548 tick 100 00:05:43.548 tick 250 00:05:43.548 tick 100 00:05:43.548 tick 100 00:05:43.548 tick 100 00:05:43.548 tick 250 00:05:43.548 tick 500 00:05:43.548 tick 100 00:05:43.548 tick 100 00:05:43.548 tick 250 00:05:43.548 tick 100 00:05:43.548 tick 100 00:05:43.548 test_end 00:05:43.548 00:05:43.548 real 0m1.215s 00:05:43.548 user 0m1.134s 00:05:43.548 sys 0m0.077s 00:05:43.548 00:31:01 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.548 00:31:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.548 ************************************ 00:05:43.548 END TEST event_reactor 00:05:43.548 ************************************ 00:05:43.548 00:31:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.548 00:31:01 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:43.548 00:31:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.548 00:31:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.548 ************************************ 00:05:43.548 START TEST event_reactor_perf 00:05:43.548 ************************************ 00:05:43.548 00:31:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.548 [2024-06-08 00:31:01.549555] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
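The event_reactor test above runs a single reactor on core 0; the oneshot and tick lines between test_start and test_end record its scheduled events firing, the numbers appearing to be the timer periods being exercised (100, 250 and 500). A direct invocation, assuming a built tree at ./spdk:

# single reactor (EAL -c 0x1 in the log), one second run
./spdk/test/event/reactor/reactor -t 1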
00:05:43.548 [2024-06-08 00:31:01.549649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191935 ] 00:05:43.548 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.548 [2024-06-08 00:31:01.614595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.548 [2024-06-08 00:31:01.684692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.490 test_start 00:05:44.490 test_end 00:05:44.490 Performance: 371432 events per second 00:05:44.490 00:05:44.490 real 0m1.210s 00:05:44.490 user 0m1.132s 00:05:44.490 sys 0m0.073s 00:05:44.490 00:31:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.490 00:31:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.490 ************************************ 00:05:44.490 END TEST event_reactor_perf 00:05:44.490 ************************************ 00:05:44.751 00:31:02 event -- event/event.sh@49 -- # uname -s 00:05:44.751 00:31:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.751 00:31:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.751 00:31:02 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.751 00:31:02 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.751 00:31:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.751 ************************************ 00:05:44.751 START TEST event_scheduler 00:05:44.751 ************************************ 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.751 * Looking for test storage... 00:05:44.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:44.751 00:31:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.751 00:31:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=192319 00:05:44.751 00:31:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.751 00:31:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.751 00:31:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 192319 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 192319 ']' 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
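The event_scheduler test that follows boots a four-reactor app with --wait-for-rpc, selects the dynamic scheduler, and then drives thread placement through a test-only RPC plugin: scheduler_thread_create registers threads with a cpumask (-m) and an activity percentage (-a), scheduler_thread_set_active changes a running thread's load, and scheduler_thread_delete retires it. A sketch of the same calls, assuming the scheduler test app is listening on /var/tmp/spdk.sock and the scheduler_plugin module from test/event/scheduler is on PYTHONPATH; thread ids 11 and 12 below are the ones the create calls return in this run:

rpc=./spdk/scripts/rpc.py

# pick the dynamic scheduler before the framework finishes init
$rpc framework_set_scheduler dynamic
$rpc framework_start_init

# a fully busy thread pinned to core 0 and an idle one pinned to core 1
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

# drop a thread to 50% activity, then delete another
$rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
$rpc --plugin scheduler_plugin scheduler_thread_delete 12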
00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:44.751 00:31:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.751 [2024-06-08 00:31:02.969070] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:44.751 [2024-06-08 00:31:02.969141] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192319 ] 00:05:44.751 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.751 [2024-06-08 00:31:03.023440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.011 [2024-06-08 00:31:03.090011] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.011 [2024-06-08 00:31:03.090170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.011 [2024-06-08 00:31:03.090325] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.011 [2024-06-08 00:31:03.090327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.581 00:31:03 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:45.581 00:31:03 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:45.581 00:31:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.581 00:31:03 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.581 00:31:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.581 POWER: Env isn't set yet! 00:05:45.581 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:45.581 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:45.581 POWER: Cannot set governor of lcore 0 to userspace 00:05:45.581 POWER: Attempting to initialise PSTAT power management... 
00:05:45.581 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:45.581 POWER: Initialized successfully for lcore 0 power management 00:05:45.581 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:45.581 POWER: Initialized successfully for lcore 1 power management 00:05:45.581 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:45.581 POWER: Initialized successfully for lcore 2 power management 00:05:45.581 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:45.582 POWER: Initialized successfully for lcore 3 power management 00:05:45.582 [2024-06-08 00:31:03.802611] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.582 [2024-06-08 00:31:03.802623] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.582 [2024-06-08 00:31:03.802629] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.582 00:31:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.582 [2024-06-08 00:31:03.859942] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.582 00:31:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.582 00:31:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 ************************************ 00:05:45.843 START TEST scheduler_create_thread 00:05:45.843 ************************************ 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 2 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 3 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 4 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 5 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 6 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 7 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.843 8 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.843 00:31:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.843 00:31:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.415 9 00:05:46.415 00:31:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.415 00:31:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.415 00:31:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:46.415 00:31:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 10 00:05:47.357 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:47.357 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.357 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:47.357 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.328 00:31:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.328 00:31:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.328 00:31:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.328 00:31:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.328 00:31:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.910 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.910 00:31:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.910 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.910 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.851 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.851 00:31:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.851 00:31:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.851 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.851 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.112 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:50.112 00:05:50.112 real 0m4.464s 00:05:50.112 user 0m0.026s 00:05:50.112 sys 0m0.005s 00:05:50.112 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.112 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.112 ************************************ 00:05:50.112 END TEST scheduler_create_thread 00:05:50.112 ************************************ 00:05:50.373 00:31:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.373 00:31:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 192319 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 192319 ']' 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 192319 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 192319 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 192319' 00:05:50.373 killing process with pid 192319 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 192319 00:05:50.373 00:31:08 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 192319 00:05:50.373 [2024-06-08 00:31:08.642755] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:50.634 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:50.634 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:50.634 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:50.634 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:50.634 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:50.634 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:50.634 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:50.634 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:50.634 00:05:50.634 real 0m5.983s 00:05:50.634 user 0m14.349s 00:05:50.634 sys 0m0.352s 00:05:50.634 00:31:08 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.634 00:31:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.634 ************************************ 00:05:50.634 END TEST event_scheduler 00:05:50.634 ************************************ 00:05:50.634 00:31:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.634 00:31:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.634 00:31:08 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.634 00:31:08 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.634 00:31:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.634 ************************************ 00:05:50.634 START TEST app_repeat 00:05:50.634 ************************************ 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=193453 00:05:50.634 00:31:08 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 193453' 00:05:50.634 Process app_repeat pid: 193453 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.634 spdk_app_start Round 0 00:05:50.634 00:31:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193453 /var/tmp/spdk-nbd.sock 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 193453 ']' 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.634 00:31:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.895 [2024-06-08 00:31:08.927171] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:50.895 [2024-06-08 00:31:08.927270] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193453 ] 00:05:50.895 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.895 [2024-06-08 00:31:08.996734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.895 [2024-06-08 00:31:09.072957] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.895 [2024-06-08 00:31:09.072960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.467 00:31:09 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.467 00:31:09 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:51.467 00:31:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.727 Malloc0 00:05:51.727 00:31:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.988 Malloc1 00:05:51.988 00:31:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.988 00:31:10 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.988 /dev/nbd0 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.988 00:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:51.988 00:31:10 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.988 1+0 records in 00:05:51.989 1+0 records out 00:05:51.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212658 s, 19.3 MB/s 00:05:51.989 00:31:10 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.989 00:31:10 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:51.989 00:31:10 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.989 00:31:10 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:51.989 00:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:51.989 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.989 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.989 00:31:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.250 /dev/nbd1 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:52.250 00:31:10 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.250 1+0 records in 00:05:52.250 1+0 records out 00:05:52.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288827 s, 14.2 MB/s 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:52.250 00:31:10 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.250 00:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.512 { 00:05:52.512 "nbd_device": "/dev/nbd0", 00:05:52.512 "bdev_name": "Malloc0" 00:05:52.512 }, 00:05:52.512 { 00:05:52.512 "nbd_device": "/dev/nbd1", 00:05:52.512 "bdev_name": "Malloc1" 00:05:52.512 } 00:05:52.512 ]' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.512 { 00:05:52.512 "nbd_device": "/dev/nbd0", 00:05:52.512 "bdev_name": "Malloc0" 00:05:52.512 }, 00:05:52.512 { 00:05:52.512 "nbd_device": "/dev/nbd1", 00:05:52.512 "bdev_name": "Malloc1" 00:05:52.512 } 00:05:52.512 ]' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.512 /dev/nbd1' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.512 /dev/nbd1' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.512 00:31:10 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.512 256+0 records in 00:05:52.512 256+0 records out 00:05:52.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121909 s, 86.0 MB/s 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.512 256+0 records in 00:05:52.512 256+0 records out 00:05:52.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171615 s, 61.1 MB/s 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.512 256+0 records in 00:05:52.512 256+0 records out 00:05:52.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178884 s, 58.6 MB/s 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.512 00:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.774 00:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.035 00:31:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.036 00:31:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.036 00:31:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.297 00:31:11 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:53.557 [2024-06-08 00:31:11.583933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.557 [2024-06-08 00:31:11.648274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.557 [2024-06-08 00:31:11.648277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.557 [2024-06-08 00:31:11.679608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.557 [2024-06-08 00:31:11.679642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.860 spdk_app_start Round 1 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193453 /var/tmp/spdk-nbd.sock 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 193453 ']' 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.860 00:31:14 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.860 Malloc0 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.860 Malloc1 00:05:56.860 00:31:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
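
The nbd0/nbd1 readiness checks traced above come from the waitfornbd helper in autotest_common.sh. A minimal sketch of that helper, reconstructed from the xtrace line markers (@867-@888); the retry delay and the failure path are assumptions, since the trace only shows the loop bounds, the /proc/partitions probe, and the one-block O_DIRECT read:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp_file=$rootdir/test/event/nbdtest   # scratch file, as in this run
        # First wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                # assumed; not visible in the xtrace
        done
        # Then prove the device is actually readable: pull one 4 KiB block with
        # O_DIRECT and require that the scratch file ends up non-empty.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$tmp_file")
            rm -f "$tmp_file"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }
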
00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.860 00:31:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.860 /dev/nbd0 00:05:56.860 00:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.860 00:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.860 00:31:15 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:56.860 00:31:15 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:56.860 00:31:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.861 1+0 records in 00:05:56.861 1+0 records out 00:05:56.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215595 s, 19.0 MB/s 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:56.861 00:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:56.861 00:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.861 00:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.861 00:31:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.122 /dev/nbd1 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.122 1+0 records in 00:05:57.122 1+0 records out 00:05:57.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000135653 s, 30.2 MB/s 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:57.122 00:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.122 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.384 { 00:05:57.384 "nbd_device": "/dev/nbd0", 00:05:57.384 "bdev_name": "Malloc0" 00:05:57.384 }, 00:05:57.384 { 00:05:57.384 "nbd_device": "/dev/nbd1", 00:05:57.384 "bdev_name": "Malloc1" 00:05:57.384 } 00:05:57.384 ]' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.384 { 00:05:57.384 "nbd_device": "/dev/nbd0", 00:05:57.384 "bdev_name": "Malloc0" 00:05:57.384 }, 00:05:57.384 { 00:05:57.384 "nbd_device": "/dev/nbd1", 00:05:57.384 "bdev_name": "Malloc1" 00:05:57.384 } 00:05:57.384 ]' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.384 /dev/nbd1' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.384 /dev/nbd1' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.384 256+0 records in 00:05:57.384 256+0 records out 00:05:57.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116375 s, 90.1 MB/s 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.384 256+0 records in 00:05:57.384 256+0 records out 00:05:57.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158139 s, 66.3 MB/s 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.384 256+0 records in 00:05:57.384 256+0 records out 00:05:57.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168164 s, 62.4 MB/s 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.384 00:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.645 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.645 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.645 00:31:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.645 
00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.646 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.907 00:31:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.907 00:31:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.167 00:31:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.167 [2024-06-08 00:31:16.409316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.428 [2024-06-08 00:31:16.472825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.428 [2024-06-08 00:31:16.472829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.428 [2024-06-08 00:31:16.505015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
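
One full round has now repeated, so this is a good point to name the shape of the test. A sketch of the app_repeat loop, paraphrased from the event.sh xtrace markers (@18-@35) above; the helper names are the real ones, the argument plumbing is simplified:

    # Start the app once; spdk_kill_instance SIGTERM makes app_repeat fall
    # through to its next internal spdk_app_start round rather than exit.
    $rootdir/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # -> Malloc0
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # give the app time to tear down and start the next round
    done
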
00:05:58.428 [2024-06-08 00:31:16.505051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.729 spdk_app_start Round 2 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193453 /var/tmp/spdk-nbd.sock 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 193453 ']' 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.729 Malloc0 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.729 Malloc1 00:06:01.729 00:31:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.729 /dev/nbd0 00:06:01.729 00:31:19 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.729 00:31:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:01.729 00:31:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.730 1+0 records in 00:06:01.730 1+0 records out 00:06:01.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277095 s, 14.8 MB/s 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:01.730 00:31:19 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:01.730 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.730 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.730 00:31:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.990 /dev/nbd1 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.990 1+0 records in 00:06:01.990 1+0 records out 00:06:01.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283868 s, 14.4 MB/s 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:01.990 00:31:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.990 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.251 { 00:06:02.251 "nbd_device": "/dev/nbd0", 00:06:02.251 "bdev_name": "Malloc0" 00:06:02.251 }, 00:06:02.251 { 00:06:02.251 "nbd_device": "/dev/nbd1", 00:06:02.251 "bdev_name": "Malloc1" 00:06:02.251 } 00:06:02.251 ]' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.251 { 00:06:02.251 "nbd_device": "/dev/nbd0", 00:06:02.251 "bdev_name": "Malloc0" 00:06:02.251 }, 00:06:02.251 { 00:06:02.251 "nbd_device": "/dev/nbd1", 00:06:02.251 "bdev_name": "Malloc1" 00:06:02.251 } 00:06:02.251 ]' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.251 /dev/nbd1' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.251 /dev/nbd1' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.251 256+0 records in 00:06:02.251 256+0 records out 00:06:02.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012396 s, 84.6 MB/s 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.251 256+0 records in 00:06:02.251 256+0 records out 00:06:02.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163039 s, 64.3 MB/s 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.251 256+0 records in 00:06:02.251 256+0 records out 00:06:02.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173616 s, 60.4 MB/s 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.251 00:31:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
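
The write/verify step traced just above reduces to: seed 1 MiB of random data once, push the same image to every NBD device with O_DIRECT, then compare the first 1 MiB back. A hedged sketch of the two passes of nbd_dd_data_verify, with the scratch-file path as in this run:

    tmp_file=$rootdir/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of noise
    for nbd in /dev/nbd0 /dev/nbd1; do                            # write pass
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                            # verify pass
        cmp -b -n 1M "$tmp_file" "$nbd"   # -b: print differing bytes on mismatch
    done
    rm "$tmp_file"
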
00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.513 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.773 00:31:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.773 00:31:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.034 00:31:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.034 [2024-06-08 00:31:21.273764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.294 [2024-06-08 00:31:21.336817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.294 [2024-06-08 00:31:21.336819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.294 [2024-06-08 00:31:21.368268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.294 [2024-06-08 00:31:21.368300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
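
The count=0 check that closes each round comes from nbd_get_count: list the attached devices over RPC, extract the device paths with jq, count them with grep. A sketch; the empty-list handling is an assumption (grep -c prints 0 but exits non-zero on no matches, and the xtrace shows a bare true absorbing that):

    nbd_get_count() {
        local rpc_server=$1
        disks_json=$($rootdir/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd) || true   # 0 matches -> exit 1
        echo "$count"   # 2 while both disks are attached, 0 after nbd_stop_disk
    }
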
00:06:06.594 00:31:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 193453 /var/tmp/spdk-nbd.sock 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 193453 ']' 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:06.594 00:31:24 event.app_repeat -- event/event.sh@39 -- # killprocess 193453 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 193453 ']' 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 193453 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 193453 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 193453' 00:06:06.594 killing process with pid 193453 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@968 -- # kill 193453 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@973 -- # wait 193453 00:06:06.594 spdk_app_start is called in Round 0. 00:06:06.594 Shutdown signal received, stop current app iteration 00:06:06.594 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:06:06.594 spdk_app_start is called in Round 1. 00:06:06.594 Shutdown signal received, stop current app iteration 00:06:06.594 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:06:06.594 spdk_app_start is called in Round 2. 00:06:06.594 Shutdown signal received, stop current app iteration 00:06:06.594 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:06:06.594 spdk_app_start is called in Round 3. 
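
The teardown just traced is the killprocess helper (autotest_common.sh@949-@973): verify the pid is alive, refuse to touch anything running as sudo, then kill and reap. Reconstructed sketch; the sudo branch is not exercised in this run and its body is elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                      # fail fast if pid is gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 in this run
        fi
        if [ "$process_name" = sudo ]; then
            return 1                                        # sudo path elided (assumption)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap so failures surface
    }
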
00:06:06.594 Shutdown signal received, stop current app iteration 00:06:06.594 00:31:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.594 00:31:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.594 00:06:06.594 real 0m15.584s 00:06:06.594 user 0m33.624s 00:06:06.594 sys 0m2.081s 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.594 00:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.594 ************************************ 00:06:06.594 END TEST app_repeat 00:06:06.594 ************************************ 00:06:06.594 00:31:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.594 00:31:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.594 00:31:24 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:06.594 00:31:24 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.594 00:31:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.594 ************************************ 00:06:06.594 START TEST cpu_locks 00:06:06.594 ************************************ 00:06:06.594 00:31:24 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.594 * Looking for test storage... 00:06:06.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.594 00:31:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.594 00:31:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.594 00:31:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.594 00:31:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.594 00:31:24 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:06.594 00:31:24 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.594 00:31:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.594 ************************************ 00:06:06.594 START TEST default_locks 00:06:06.594 ************************************ 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=196958 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 196958 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 196958 ']' 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
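
default_locks is about to verify that an SPDK target started with -m 0x1 holds an advisory file lock for its claimed core; the check is a one-liner over lslocks. Note that the "lslocks: write error" in the trace below is just grep -q closing the pipe after the first match, not a test failure. Sketch, with the pid from this run used purely for illustration:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # one lock entry per claimed core
    }
    $rootdir/build/bin/spdk_tgt -m 0x1 &          # claim core 0 only
    spdk_tgt_pid=$!                               # 196958 in this run
    waitforlisten $spdk_tgt_pid
    locks_exist $spdk_tgt_pid
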
00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.594 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.594 [2024-06-08 00:31:24.745970] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:06.594 [2024-06-08 00:31:24.746032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196958 ] 00:06:06.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.594 [2024-06-08 00:31:24.809092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.868 [2024-06-08 00:31:24.883581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.463 lslocks: write error 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 196958 ']' 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 196958' 00:06:07.463 killing process with pid 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 196958 00:06:07.463 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 196958 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 196958 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 196958 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 196958 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 196958 ']' 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.725 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (196958) - No such process 00:06:07.725 ERROR: process (pid: 196958) is no longer running 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.726 00:06:07.726 real 0m1.229s 00:06:07.726 user 0m1.291s 00:06:07.726 sys 0m0.400s 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.726 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.726 ************************************ 00:06:07.726 END TEST default_locks 00:06:07.726 ************************************ 00:06:07.726 00:31:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.726 00:31:25 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:07.726 00:31:25 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.726 00:31:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.726 ************************************ 00:06:07.726 START TEST default_locks_via_rpc 00:06:07.726 ************************************ 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=197142 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 197142 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.726 00:31:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 197142 ']' 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.726 00:31:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.987 [2024-06-08 00:31:26.049966] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:07.987 [2024-06-08 00:31:26.050021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197142 ] 00:06:07.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.987 [2024-06-08 00:31:26.114494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.987 [2024-06-08 00:31:26.185604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.558 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 197142 00:06:08.819 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 197142 00:06:08.819 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 197142 00:06:09.080 00:31:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 197142 ']' 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 197142 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 197142 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 197142' 00:06:09.080 killing process with pid 197142 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 197142 00:06:09.080 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 197142 00:06:09.341 00:06:09.341 real 0m1.523s 00:06:09.341 user 0m1.626s 00:06:09.341 sys 0m0.516s 00:06:09.341 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.341 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.341 ************************************ 00:06:09.341 END TEST default_locks_via_rpc 00:06:09.341 ************************************ 00:06:09.341 00:31:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.341 00:31:27 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:09.341 00:31:27 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.341 00:31:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.341 ************************************ 00:06:09.341 START TEST non_locking_app_on_locked_coremask 00:06:09.341 ************************************ 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=197459 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 197459 /var/tmp/spdk.sock 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 197459 ']' 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
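The two runs above, default_locks and default_locks_via_rpc, exercise the same mechanism: an spdk_tgt started with -m 0x1 claims core 0 by taking a lock on the file /var/tmp/spdk_cpu_lock_000. default_locks checks the claim from outside the process; the via_rpc variant additionally drops and retakes it with the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs before checking again. The stray "lslocks: write error" lines are harmless: grep -q exits on its first match and closes the pipe while lslocks is still writing. A minimal sketch of the check, assuming the helper works as the trace at event/cpu_locks.sh@22 suggests:

  # Sketch of the locks_exist check traced above; the real helper may differ.
  locks_exist() {
    local pid=$1
    # A claimed core shows up as a lock on /var/tmp/spdk_cpu_lock_NNN held
    # by the target's pid; grep -q returns success on the first match.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }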
00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.341 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.602 [2024-06-08 00:31:27.652706] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:09.602 [2024-06-08 00:31:27.652758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197459 ] 00:06:09.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.602 [2024-06-08 00:31:27.710772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.602 [2024-06-08 00:31:27.775406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.173 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:10.173 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=197705 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 197705 /var/tmp/spdk2.sock 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 197705 ']' 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:10.174 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.434 [2024-06-08 00:31:28.474467] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:10.434 [2024-06-08 00:31:28.474519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197705 ] 00:06:10.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.434 [2024-06-08 00:31:28.562286] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.434 [2024-06-08 00:31:28.562314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.434 [2024-06-08 00:31:28.691947] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.005 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:11.005 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:11.005 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 197459 00:06:11.005 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 197459 00:06:11.005 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.580 lslocks: write error 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 197459 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 197459 ']' 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 197459 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 197459 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 197459' 00:06:11.580 killing process with pid 197459 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 197459 00:06:11.580 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 197459 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 197705 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 197705 ']' 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 197705 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 197705 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 197705' 00:06:12.151 killing 
process with pid 197705 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 197705 00:06:12.151 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 197705 00:06:12.412 00:06:12.412 real 0m2.913s 00:06:12.412 user 0m3.220s 00:06:12.412 sys 0m0.835s 00:06:12.412 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.412 00:31:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.412 ************************************ 00:06:12.412 END TEST non_locking_app_on_locked_coremask 00:06:12.412 ************************************ 00:06:12.412 00:31:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.412 00:31:30 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.412 00:31:30 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.412 00:31:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.412 ************************************ 00:06:12.412 START TEST locking_app_on_unlocked_coremask 00:06:12.412 ************************************ 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=198079 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 198079 /var/tmp/spdk.sock 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 198079 ']' 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.412 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.412 [2024-06-08 00:31:30.629337] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:12.412 [2024-06-08 00:31:30.629384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198079 ] 00:06:12.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.413 [2024-06-08 00:31:30.688519] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
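non_locking_app_on_locked_coremask, which finishes above, covers the opt-out path: the first target keeps its core 0 lock while a second instance starts on the same mask with --disable-cpumask-locks and its own RPC socket, so the two never contend ('CPU core locks deactivated'). Roughly, with the binary path, flags, and sockets taken from this trace:

  # Two targets sharing -m 0x1; only the first takes /var/tmp/spdk_cpu_lock_000.
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $SPDK_TGT -m 0x1 &                                                 # claims core 0
  $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # opts out, starts cleanly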
00:06:12.413 [2024-06-08 00:31:30.688549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.672 [2024-06-08 00:31:30.754792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=198387 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 198387 /var/tmp/spdk2.sock 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 198387 ']' 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.242 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.242 [2024-06-08 00:31:31.450009] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:13.242 [2024-06-08 00:31:31.450063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198387 ] 00:06:13.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.503 [2024-06-08 00:31:31.537329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.503 [2024-06-08 00:31:31.671001] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.074 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.074 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:14.074 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 198387 00:06:14.074 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 198387 00:06:14.074 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.645 lslocks: write error 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 198079 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 198079 ']' 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 198079 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 198079 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 198079' 00:06:14.645 killing process with pid 198079 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 198079 00:06:14.645 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 198079 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 198387 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 198387 ']' 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 198387 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 198387 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.216 
00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 198387' 00:06:15.216 killing process with pid 198387 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 198387 00:06:15.216 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 198387 00:06:15.476 00:06:15.476 real 0m2.969s 00:06:15.476 user 0m3.248s 00:06:15.476 sys 0m0.880s 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.476 ************************************ 00:06:15.476 END TEST locking_app_on_unlocked_coremask 00:06:15.476 ************************************ 00:06:15.476 00:31:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.476 00:31:33 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.476 00:31:33 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.476 00:31:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.476 ************************************ 00:06:15.476 START TEST locking_app_on_locked_coremask 00:06:15.476 ************************************ 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=198789 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 198789 /var/tmp/spdk.sock 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 198789 ']' 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.476 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.476 [2024-06-08 00:31:33.674097] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
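locking_app_on_unlocked_coremask, closed out above, is the mirror image: the first target opts out with --disable-cpumask-locks, so a second, lock-taking target on the same core mask can claim core 0, and both are torn down afterwards. The repeated kill-0/uname/ps sequences are the harness's killprocess helper; condensed from the trace (the real function has more branches, for instance sudo-run processes):

  # Condensed killprocess logic as traced at autotest_common.sh@949-973.
  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return        # only proceed if the pid is still alive
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                     # reap it so the next test starts clean
  }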
00:06:15.476 [2024-06-08 00:31:33.674145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198789 ] 00:06:15.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.476 [2024-06-08 00:31:33.732664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.736 [2024-06-08 00:31:33.799510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=198926 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 198926 /var/tmp/spdk2.sock 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 198926 /var/tmp/spdk2.sock 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 198926 /var/tmp/spdk2.sock 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 198926 ']' 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:16.308 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.308 [2024-06-08 00:31:34.482305] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:16.308 [2024-06-08 00:31:34.482352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198926 ] 00:06:16.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.308 [2024-06-08 00:31:34.568900] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 198789 has claimed it. 00:06:16.308 [2024-06-08 00:31:34.568939] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (198926) - No such process 00:06:16.880 ERROR: process (pid: 198926) is no longer running 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 198789 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 198789 00:06:16.880 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.451 lslocks: write error 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 198789 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 198789 ']' 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 198789 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 198789 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 198789' 00:06:17.451 killing process with pid 198789 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 198789 00:06:17.451 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 198789 00:06:17.712 00:06:17.712 real 0m2.156s 00:06:17.712 user 0m2.400s 00:06:17.712 sys 0m0.584s 00:06:17.712 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.712 00:31:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.712 ************************************ 00:06:17.712 END TEST locking_app_on_locked_coremask 00:06:17.712 ************************************ 00:06:17.712 00:31:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.712 00:31:35 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.712 00:31:35 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.712 00:31:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.712 ************************************ 00:06:17.712 START TEST locking_overlapped_coremask 00:06:17.712 ************************************ 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=199177 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 199177 /var/tmp/spdk.sock 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 199177 ']' 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.712 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.712 [2024-06-08 00:31:35.906074] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
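locking_app_on_locked_coremask, finished above, is the first negative case: the second lock-taking target dies at startup with 'Cannot create lock on core 0, probably process 198789 has claimed it' (app.c:771, claim_cpu_cores), and the NOT wrapper converts that expected failure into a pass, as the es=1 and (( !es == 0 )) steps in the trace show. The wrapper's effective logic, simplified (the traced version also inspects es > 128 for signal deaths):

  # NOT: succeed only when the wrapped command fails.
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }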
00:06:17.712 [2024-06-08 00:31:35.906126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199177 ] 00:06:17.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.712 [2024-06-08 00:31:35.966630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.973 [2024-06-08 00:31:36.038018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.973 [2024-06-08 00:31:36.038153] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.973 [2024-06-08 00:31:36.038156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=199498 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 199498 /var/tmp/spdk2.sock 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 199498 /var/tmp/spdk2.sock 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 199498 /var/tmp/spdk2.sock 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 199498 ']' 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.545 00:31:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.545 [2024-06-08 00:31:36.731335] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:18.545 [2024-06-08 00:31:36.731389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199498 ] 00:06:18.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.545 [2024-06-08 00:31:36.802816] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 199177 has claimed it. 00:06:18.545 [2024-06-08 00:31:36.802846] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (199498) - No such process 00:06:19.117 ERROR: process (pid: 199498) is no longer running 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 199177 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 199177 ']' 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 199177 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 199177 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 199177' 00:06:19.117 killing process with pid 199177 00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 199177 
00:06:19.117 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 199177 00:06:19.378 00:06:19.378 real 0m1.745s 00:06:19.378 user 0m4.949s 00:06:19.378 sys 0m0.357s 00:06:19.378 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.378 00:31:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.378 ************************************ 00:06:19.378 END TEST locking_overlapped_coremask 00:06:19.378 ************************************ 00:06:19.378 00:31:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.378 00:31:37 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:19.378 00:31:37 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.378 00:31:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 ************************************ 00:06:19.639 START TEST locking_overlapped_coremask_via_rpc 00:06:19.639 ************************************ 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=199603 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 199603 /var/tmp/spdk.sock 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 199603 ']' 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:19.639 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 [2024-06-08 00:31:37.728280] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:19.639 [2024-06-08 00:31:37.728334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199603 ] 00:06:19.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.639 [2024-06-08 00:31:37.792020] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
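The mask arithmetic behind locking_overlapped_coremask, which ends above: -m 0x7 is binary 111, cores 0 through 2, and the second instance's -m 0x1c is binary 11100, cores 2 through 4, so the two masks collide exactly on core 2 and the second claim is refused. check_remaining_locks then confirms the surviving target still holds all three lock files; its comparison is visible verbatim in the trace:

  # check_remaining_locks as traced in event/cpu_locks.sh: glob the live
  # lock files and compare against the expected set for a 0x7 mask.
  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }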
00:06:19.639 [2024-06-08 00:31:37.792059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.639 [2024-06-08 00:31:37.865357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.639 [2024-06-08 00:31:37.865479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.639 [2024-06-08 00:31:37.865476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=199869 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 199869 /var/tmp/spdk2.sock 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 199869 ']' 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:20.582 00:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.582 [2024-06-08 00:31:38.550464] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:20.582 [2024-06-08 00:31:38.550515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199869 ] 00:06:20.582 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.582 [2024-06-08 00:31:38.622775] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.582 [2024-06-08 00:31:38.622800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.582 [2024-06-08 00:31:38.732690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.582 [2024-06-08 00:31:38.732846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.582 [2024-06-08 00:31:38.732848] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.155 [2024-06-08 00:31:39.324462] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 199603 has claimed it. 
00:06:21.155 request:
00:06:21.155 {
00:06:21.155 "method": "framework_enable_cpumask_locks",
00:06:21.155 "req_id": 1
00:06:21.155 }
00:06:21.155 Got JSON-RPC error response
00:06:21.155 response:
00:06:21.155 {
00:06:21.155 "code": -32603,
00:06:21.155 "message": "Failed to claim CPU core: 2"
00:06:21.155 }
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 199603 /var/tmp/spdk.sock
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 199603 ']'
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:21.155 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 199869 /var/tmp/spdk2.sock
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 199869 ']'
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:21.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
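This exchange is the heart of the via_rpc variant: both targets booted with --disable-cpumask-locks, the first then claimed cores 0 through 2 with framework_enable_cpumask_locks, so the same request to the second target on /var/tmp/spdk2.sock is refused with JSON-RPC error -32603, core 2 being taken, and NOT rpc_cmd counts that refusal as the pass condition. Replayed by hand it would look roughly like this, assuming SPDK's bundled scripts/rpc.py client:

  # RPC name and socket paths as traced above.
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks      # first target: succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "refused: core 2 already claimed"                            # fails with -32603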
00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.416 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.416 00:06:21.416 real 0m1.993s 00:06:21.416 user 0m0.764s 00:06:21.416 sys 0m0.154s 00:06:21.417 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.417 00:31:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.417 ************************************ 00:06:21.417 END TEST locking_overlapped_coremask_via_rpc 00:06:21.417 ************************************ 00:06:21.706 00:31:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.706 00:31:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 199603 ]] 00:06:21.706 00:31:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 199603 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 199603 ']' 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 199603 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 199603 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 199603' 00:06:21.706 killing process with pid 199603 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 199603 00:06:21.706 00:31:39 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 199603 00:06:21.967 00:31:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 199869 ]] 00:06:21.967 00:31:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 199869 00:06:21.967 00:31:39 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 199869 ']' 00:06:21.967 00:31:39 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 199869 00:06:21.967 00:31:39 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:21.967 00:31:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:06:21.967 00:31:39 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 199869 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 199869' 00:06:21.967 killing process with pid 199869 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 199869 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 199869 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 199603 ]] 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 199603 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 199603 ']' 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 199603 00:06:21.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (199603) - No such process 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 199603 is not found' 00:06:21.967 Process with pid 199603 is not found 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 199869 ]] 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 199869 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 199869 ']' 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 199869 00:06:21.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (199869) - No such process 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 199869 is not found' 00:06:21.967 Process with pid 199869 is not found 00:06:21.967 00:31:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.967 00:06:21.967 real 0m15.685s 00:06:21.967 user 0m27.006s 00:06:21.967 sys 0m4.599s 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.967 00:31:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.967 ************************************ 00:06:21.967 END TEST cpu_locks 00:06:21.967 ************************************ 00:06:22.228 00:06:22.228 real 0m41.457s 00:06:22.228 user 1m21.580s 00:06:22.228 sys 0m7.654s 00:06:22.228 00:31:40 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.228 00:31:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.228 ************************************ 00:06:22.228 END TEST event 00:06:22.228 ************************************ 00:06:22.228 00:31:40 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:22.228 00:31:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:22.228 00:31:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.228 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.228 ************************************ 00:06:22.228 START TEST thread 00:06:22.228 ************************************ 00:06:22.228 00:31:40 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:22.228 * Looking for test storage... 00:06:22.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:22.228 00:31:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.228 00:31:40 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:22.228 00:31:40 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.228 00:31:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.229 ************************************ 00:06:22.229 START TEST thread_poller_perf 00:06:22.229 ************************************ 00:06:22.229 00:31:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.229 [2024-06-08 00:31:40.471525] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:22.229 [2024-06-08 00:31:40.471569] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200310 ] 00:06:22.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.489 [2024-06-08 00:31:40.525618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.489 [2024-06-08 00:31:40.593485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.489 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.429 ====================================== 00:06:23.429 busy:2407750802 (cyc) 00:06:23.429 total_run_count: 283000 00:06:23.429 tsc_hz: 2400000000 (cyc) 00:06:23.429 ====================================== 00:06:23.429 poller_cost: 8507 (cyc), 3544 (nsec) 00:06:23.429 00:06:23.429 real 0m1.190s 00:06:23.429 user 0m1.125s 00:06:23.429 sys 0m0.060s 00:06:23.429 00:31:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.429 00:31:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.429 ************************************ 00:06:23.429 END TEST thread_poller_perf 00:06:23.429 ************************************ 00:06:23.429 00:31:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.430 00:31:41 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:23.430 00:31:41 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.430 00:31:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.689 ************************************ 00:06:23.689 START TEST thread_poller_perf 00:06:23.689 ************************************ 00:06:23.689 00:31:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.689 [2024-06-08 00:31:41.749176] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
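The summary block above is plain arithmetic over its own counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts cycles at tsc_hz (2.4 GHz here, i.e. 2.4 cycles per nanosecond). Checking the first run by hand; the -l 0 run that follows drops to 629 cyc / 262 nsec once the 1-microsecond poller period is removed:

    echo $(( 2407750802 / 283000 ))             # -> 8507 (cyc per poller call)
    echo $(( 8507 * 1000000000 / 2400000000 ))  # -> 3544 (nsec per poller call)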
00:06:23.689 [2024-06-08 00:31:41.749266] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200663 ] 00:06:23.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.689 [2024-06-08 00:31:41.814755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.689 [2024-06-08 00:31:41.882474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.071 ====================================== 00:06:25.071 busy:2402011378 (cyc) 00:06:25.071 total_run_count: 3814000 00:06:25.071 tsc_hz: 2400000000 (cyc) 00:06:25.071 ====================================== 00:06:25.071 poller_cost: 629 (cyc), 262 (nsec) 00:06:25.071 00:06:25.071 real 0m1.210s 00:06:25.071 user 0m1.132s 00:06:25.071 sys 0m0.074s 00:06:25.072 00:31:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.072 00:31:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 ************************************ 00:06:25.072 END TEST thread_poller_perf 00:06:25.072 ************************************ 00:06:25.072 00:31:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:25.072 00:06:25.072 real 0m2.645s 00:06:25.072 user 0m2.354s 00:06:25.072 sys 0m0.295s 00:06:25.072 00:31:42 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.072 00:31:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 ************************************ 00:06:25.072 END TEST thread 00:06:25.072 ************************************ 00:06:25.072 00:31:43 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:25.072 00:31:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.072 00:31:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.072 00:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 ************************************ 00:06:25.072 START TEST accel 00:06:25.072 ************************************ 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:25.072 * Looking for test storage... 00:06:25.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:25.072 00:31:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:25.072 00:31:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:25.072 00:31:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.072 00:31:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=200990 00:06:25.072 00:31:43 accel -- accel/accel.sh@63 -- # waitforlisten 200990 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@830 -- # '[' -z 200990 ']' 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
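That "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is printed by the waitforlisten helper before the accel tests can issue RPCs. A minimal sketch of the pattern it implies - poll until the target's RPC socket appears, bail out if the process dies - with the loop bound and sleep interval being illustrative guesses, not the helper's actual code:

    pid=$1; sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target $pid exited early" >&2; exit 1; }
        [[ -S $sock ]] && exit 0    # RPC socket is up, target is listening
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2; exit 1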
00:06:25.072 00:31:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.072 00:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.072 00:31:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:25.072 00:31:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.072 00:31:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.072 00:31:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.072 00:31:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.072 00:31:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.072 00:31:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:25.072 00:31:43 accel -- accel/accel.sh@41 -- # jq -r . 00:06:25.072 [2024-06-08 00:31:43.216666] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:25.072 [2024-06-08 00:31:43.216742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200990 ] 00:06:25.072 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.072 [2024-06-08 00:31:43.283035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.331 [2024-06-08 00:31:43.356703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.900 00:31:43 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:25.900 00:31:43 accel -- common/autotest_common.sh@863 -- # return 0 00:06:25.900 00:31:43 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:25.900 00:31:43 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:25.900 00:31:43 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:25.900 00:31:43 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:25.900 00:31:43 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:25.900 00:31:43 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:25.900 00:31:43 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:25.900 00:31:43 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:25.900 00:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.900 00:31:43 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.900 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.900 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.900 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 
00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.901 00:31:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.901 00:31:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.901 00:31:44 accel -- accel/accel.sh@75 -- # killprocess 200990 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@949 -- # '[' -z 200990 ']' 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@953 -- # kill -0 200990 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@954 -- # uname 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 200990 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 200990' 00:06:25.901 killing process with pid 200990 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@968 -- # kill 200990 00:06:25.901 00:31:44 accel -- common/autotest_common.sh@973 -- # wait 200990 00:06:26.161 00:31:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:26.161 00:31:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.161 00:31:44 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:26.161 00:31:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
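The long IFS== / read -r opc module loop above is only splitting "opcode=module" pairs: accel_get_opc_assignments presumably returns a JSON object mapping each opcode to its module (all "software" in this run), the jq expression flattens it to key=value lines, and read splits each line on '='. Condensed into the equivalent pattern (a while-read instead of the array-plus-for form in the trace):

    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module   # e.g. expected_opcs[copy]=software
    done < <(rpc_cmd accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')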
00:06:26.161 00:31:44 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.161 00:31:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:26.161 00:31:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.161 00:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.421 ************************************ 00:06:26.421 START TEST accel_missing_filename 00:06:26.421 ************************************ 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.421 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:26.421 00:31:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:26.421 [2024-06-08 00:31:44.480134] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:26.421 [2024-06-08 00:31:44.480225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201172 ] 00:06:26.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.421 [2024-06-08 00:31:44.541995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.421 [2024-06-08 00:31:44.613804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.421 [2024-06-08 00:31:44.645598] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.421 [2024-06-08 00:31:44.682426] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:26.681 A filename is required. 
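Each of the negative accel tests from here on is driven by the same NOT wrapper: run_test hands it the accel_perf command line, NOT validates the argument (the type -t probe in the trace), runs it, captures the exit status into es, and succeeds only when es is non-zero - so "A filename is required." aborting the run is exactly what makes the test pass. A minimal sketch of that inversion (the real helper in autotest_common.sh also normalizes signal- and errno-encoded statuses, as the es= lines below show):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, remember its failure status
        (( es != 0 ))    # succeed only if the command failed
    }
    NOT accel_perf -t 1 -w compress   # passes: compress without -l must fail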
00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:26.681 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:26.682 00:06:26.682 real 0m0.287s 00:06:26.682 user 0m0.222s 00:06:26.682 sys 0m0.107s 00:06:26.682 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.682 00:31:44 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:26.682 ************************************ 00:06:26.682 END TEST accel_missing_filename 00:06:26.682 ************************************ 00:06:26.682 00:31:44 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.682 00:31:44 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:26.682 00:31:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.682 00:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.682 ************************************ 00:06:26.682 START TEST accel_compress_verify 00:06:26.682 ************************************ 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.682 00:31:44 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.682 
00:31:44 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:26.682 00:31:44 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:26.682 [2024-06-08 00:31:44.841248] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:26.682 [2024-06-08 00:31:44.841321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201378 ] 00:06:26.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.682 [2024-06-08 00:31:44.904129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.942 [2024-06-08 00:31:44.975382] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.942 [2024-06-08 00:31:45.007352] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.942 [2024-06-08 00:31:45.044151] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:26.942 00:06:26.942 Compression does not support the verify option, aborting. 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:26.942 00:06:26.942 real 0m0.286s 00:06:26.942 user 0m0.219s 00:06:26.942 sys 0m0.107s 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.942 00:31:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.942 ************************************ 00:06:26.942 END TEST accel_compress_verify 00:06:26.942 ************************************ 00:06:26.942 00:31:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:26.942 00:31:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:26.942 00:31:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.942 00:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.942 ************************************ 00:06:26.942 START TEST accel_wrong_workload 00:06:26.942 ************************************ 00:06:26.942 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:26.943 
00:31:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:26.943 00:31:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:26.943 Unsupported workload type: foobar 00:06:26.943 [2024-06-08 00:31:45.203296] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:26.943 accel_perf options: 00:06:26.943 [-h help message] 00:06:26.943 [-q queue depth per core] 00:06:26.943 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.943 [-T number of threads per core 00:06:26.943 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.943 [-t time in seconds] 00:06:26.943 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.943 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:26.943 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.943 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.943 [-S for crc32c workload, use this seed value (default 0) 00:06:26.943 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.943 [-f for fill workload, use this BYTE value (default 255) 00:06:26.943 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.943 [-y verify result if this switch is on] 00:06:26.943 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.943 Can be used to spread operations across a wider range of memory. 
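Every option in the usage text above corresponds to flags exercised elsewhere in this log; for contrast with the rejected '-w foobar', a well-formed invocation assembled purely from the documented options (the values are illustrative):

    # 32 ops in flight, 4 KiB buffers, 5 s crc32c run with seed 32, verifying results
    accel_perf -q 32 -o 4096 -t 5 -w crc32c -S 32 -y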
00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:26.943 00:06:26.943 real 0m0.035s 00:06:26.943 user 0m0.017s 00:06:26.943 sys 0m0.018s 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.943 00:31:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:26.943 ************************************ 00:06:26.943 END TEST accel_wrong_workload 00:06:26.943 ************************************ 00:06:26.943 Error: writing output failed: Broken pipe 00:06:27.203 00:31:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.203 00:31:45 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:27.203 00:31:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.203 00:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.203 ************************************ 00:06:27.203 START TEST accel_negative_buffers 00:06:27.203 ************************************ 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:27.203 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:27.203 00:31:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:27.204 -x option must be non-negative. 
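The es= bookkeeping after each failed run (es=234 -> 106 -> 1 above, es=161 -> 33 -> 1 for the verify test) is autotest_common normalizing raw exit statuses before NOT checks them: anything above 128 has 128 subtracted, and whatever remains is collapsed to 1. The raw values are plausibly negative errnos wrapped to eight bits (256-22=234 for EINVAL, 256-95=161 for EOPNOTSUPP), though the log itself does not confirm that. The normalization step, mirrored from the trace:

    normalize_es() {                            # mirrors the es= lines, not the helper verbatim
        local es=$1
        (( es > 128 )) && es=$(( es - 128 ))    # strip the >128 offset first
        (( es != 0 )) && es=1                   # observed: both 106 and 33 collapse to 1
        echo "$es"
    }
    normalize_es 234    # -> 1
    normalize_es 161    # -> 1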
00:06:27.204 [2024-06-08 00:31:45.297636] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:27.204 accel_perf options: 00:06:27.204 [-h help message] 00:06:27.204 [-q queue depth per core] 00:06:27.204 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:27.204 [-T number of threads per core 00:06:27.204 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:27.204 [-t time in seconds] 00:06:27.204 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:27.204 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:27.204 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:27.204 [-l for compress/decompress workloads, name of uncompressed input file 00:06:27.204 [-S for crc32c workload, use this seed value (default 0) 00:06:27.204 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:27.204 [-f for fill workload, use this BYTE value (default 255) 00:06:27.204 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:27.204 [-y verify result if this switch is on] 00:06:27.204 [-a tasks to allocate per core (default: same value as -q)] 00:06:27.204 Can be used to spread operations across a wider range of memory. 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:27.204 00:06:27.204 real 0m0.020s 00:06:27.204 user 0m0.010s 00:06:27.204 sys 0m0.010s 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.204 00:31:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:27.204 ************************************ 00:06:27.204 END TEST accel_negative_buffers 00:06:27.204 ************************************ 00:06:27.204 Error: writing output failed: Broken pipe 00:06:27.204 00:31:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:27.204 00:31:45 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:27.204 00:31:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.204 00:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.204 ************************************ 00:06:27.204 START TEST accel_crc32c 00:06:27.204 ************************************ 00:06:27.204 00:31:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:27.204 00:31:45 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:27.204 [2024-06-08 00:31:45.404206] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:27.204 [2024-06-08 00:31:45.404290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201511 ] 00:06:27.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.204 [2024-06-08 00:31:45.473814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.464 [2024-06-08 00:31:45.539757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.464 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.465 00:31:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:28.406 00:31:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.406 00:06:28.406 real 0m1.292s 00:06:28.406 user 0m1.199s 00:06:28.406 sys 0m0.103s 00:06:28.406 00:31:46 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:28.406 00:31:46 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:28.406 ************************************ 00:06:28.406 END TEST accel_crc32c 00:06:28.406 ************************************ 00:06:28.668 00:31:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:28.668 00:31:46 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:28.668 00:31:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.668 00:31:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 ************************************ 00:06:28.668 START TEST accel_crc32c_C2 00:06:28.668 ************************************ 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:28.668 00:31:46 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:28.668 [2024-06-08 00:31:46.773851] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:28.668 [2024-06-08 00:31:46.773941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201864 ] 00:06:28.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.668 [2024-06-08 00:31:46.834984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.668 [2024-06-08 00:31:46.901554] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.668 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.669 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.053 
00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.053 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.054 00:06:30.054 real 0m1.286s 00:06:30.054 user 0m1.198s 00:06:30.054 sys 0m0.099s 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.054 00:31:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:30.054 ************************************ 00:06:30.054 END TEST accel_crc32c_C2 00:06:30.054 ************************************ 00:06:30.054 00:31:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:30.054 00:31:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:30.054 00:31:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.054 00:31:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.054 ************************************ 00:06:30.054 START TEST accel_copy 00:06:30.054 ************************************ 00:06:30.054 00:31:48 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:30.054 00:31:48 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:30.054 [2024-06-08 00:31:48.134930] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:30.054 [2024-06-08 00:31:48.135010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202075 ] 00:06:30.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.054 [2024-06-08 00:31:48.197025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.054 [2024-06-08 00:31:48.263366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.054 00:31:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
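The val=/case/IFS=:/read sequences that dominate this trace are accel.sh consuming the settings readout for each case: operation (copy), buffer size ('4096 bytes'), module (software), two 32s, a single worker, a '1 seconds' run, and verify=Yes. The pattern is an ordinary colon-separated read loop; a minimal sketch follows, with a hypothetical here-string and hypothetical key names (opc, module, qd) standing in for whatever the harness actually emits:

    # Sketch only: the real loop in accel.sh reads from the test harness,
    # not from a canned string, and tracks more keys than shown here.
    settings=$'opc:copy\nmodule:software\nqd:32\ntime:1 seconds\nverify:Yes'
    while IFS=: read -r var val; do
        case "$var" in
            module) accel_module=$val ;;   # feeds the later [[ -n software ]] check
            opc)    accel_opc=$val ;;      # feeds the later [[ -n copy ]] check
        esac
    done <<< "$settings"

Splitting on the first colon only is what lets multi-word values like '1 seconds' or '4096 bytes' survive intact in $val.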
00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:31.439 00:31:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.439 00:06:31.439 real 0m1.285s 00:06:31.439 user 0m1.193s 00:06:31.439 sys 0m0.103s 00:06:31.439 00:31:49 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.439 00:31:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 ************************************ 00:06:31.439 END TEST accel_copy 00:06:31.439 ************************************ 00:06:31.439 00:31:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.439 00:31:49 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:31.439 00:31:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.439 00:31:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 ************************************ 00:06:31.439 START TEST accel_fill 00:06:31.439 ************************************ 00:06:31.439 00:31:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.439 00:31:49 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:31.439 [2024-06-08 00:31:49.498073] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:31.439 [2024-06-08 00:31:49.498164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202268 ] 00:06:31.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.439 [2024-06-08 00:31:49.558654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.439 [2024-06-08 00:31:49.624000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.439 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.440 00:31:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:32.825 00:31:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.825 00:06:32.825 real 0m1.285s 00:06:32.825 user 0m1.194s 00:06:32.825 sys 0m0.101s 00:06:32.825 00:31:50 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.825 00:31:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 ************************************ 00:06:32.825 END TEST accel_fill 00:06:32.825 ************************************ 00:06:32.825 00:31:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:32.825 00:31:50 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:32.825 00:31:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.825 00:31:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 ************************************ 00:06:32.825 START TEST accel_copy_crc32c 00:06:32.825 ************************************ 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:32.825 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
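Every case in this block launches the same example binary with a different -w workload; accel_fill above simply layers extra knobs on the shared flags, which is apparently why its readout switches from 32/32 to 64/64 and includes val=0x80 (128, the -f fill byte, in hex). Reassembled on one line, with a best-effort gloss of the flags (taken from accel_perf's usage text, not from this log, so treat it as an assumption):

    # Flag gloss (assumed, not restated from this log):
    #   -c /dev/fd/62  JSON accel config piped in on fd 62 by build_accel_config
    #   -t 1           seconds to run        -w fill   workload to exercise
    #   -f 128         fill byte (0x80)      -q 64     queue depth
    #   -a 64          buffer alignment      -y        verify the result
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y

The copy_crc32c_C2 variant further down adds -C 2 the same way, which is consistent with the '8192 bytes' buffer (two chained 4096-byte blocks) appearing in that test's readout.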
00:06:32.825 [2024-06-08 00:31:50.858329] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:32.825 [2024-06-08 00:31:50.858427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202605 ] 00:06:32.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.825 [2024-06-08 00:31:50.919661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.825 [2024-06-08 00:31:50.984875] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.825 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.826 00:31:51 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.826 00:31:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.212 00:06:34.212 real 0m1.284s 00:06:34.212 user 0m1.200s 00:06:34.212 sys 0m0.096s 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.212 00:31:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 ************************************ 00:06:34.212 END TEST accel_copy_crc32c 00:06:34.212 ************************************ 00:06:34.212 00:31:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.212 00:31:52 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:34.212 00:31:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.212 00:31:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 ************************************ 00:06:34.212 START TEST accel_copy_crc32c_C2 00:06:34.212 ************************************ 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:34.212 [2024-06-08 00:31:52.216038] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:34.212 [2024-06-08 00:31:52.216128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202952 ] 00:06:34.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.212 [2024-06-08 00:31:52.275838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.212 [2024-06-08 00:31:52.339929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.212 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:34.213 00:31:52 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.213 00:31:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.596 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.596 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.597 00:06:35.597 real 0m1.280s 00:06:35.597 user 0m1.198s 00:06:35.597 sys 0m0.094s 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.597 00:31:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:35.597 ************************************ 00:06:35.597 END TEST accel_copy_crc32c_C2 00:06:35.597 ************************************ 00:06:35.597 00:31:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:35.597 00:31:53 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:35.597 00:31:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.597 00:31:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.597 ************************************ 00:06:35.597 START TEST accel_dualcast 00:06:35.597 ************************************ 00:06:35.597 00:31:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:35.597 [2024-06-08 00:31:53.573027] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
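The START/END banners and the real/user/sys triples that bracket every case come from the run_test wrapper, invoked above as run_test accel_dualcast accel_test -t 1 -w dualcast -y. A simplified sketch of that convention follows; the real wrapper in the common test helpers also manages xtrace and exit-code bookkeeping, so this is an approximation rather than the actual implementation:

    # Simplified run_test: banner, run the case under `time`, banner again.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # emits the real/user/sys lines seen in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test accel_dualcast accel_test -t 1 -w dualcast -y

Because the body runs under the `time` keyword, the ~1.28s wall-clock figures line up with the -t 1 run time plus per-case startup and teardown.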
00:06:35.597 [2024-06-08 00:31:53.573088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203301 ] 00:06:35.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.597 [2024-06-08 00:31:53.634484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.597 [2024-06-08 00:31:53.702229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 
00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.597 00:31:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:36.983 00:31:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.983 00:06:36.983 real 0m1.288s 00:06:36.983 user 0m1.194s 00:06:36.983 sys 0m0.104s 00:06:36.983 00:31:54 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.983 00:31:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:36.983 ************************************ 00:06:36.983 END TEST accel_dualcast 00:06:36.983 ************************************ 00:06:36.983 00:31:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:36.983 00:31:54 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:36.983 00:31:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.983 00:31:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.983 ************************************ 00:06:36.983 START TEST accel_compare 00:06:36.983 ************************************ 00:06:36.983 00:31:54 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:36.983 00:31:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:36.983 [2024-06-08 00:31:54.937393] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
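The oddly escaped comparison traced for each case, [[ software == \s\o\f\t\w\a\r\e ]], is not a typo in the test: it is how bash's set -x prints a [[ ]] match whose right-hand side is quoted, escaping every character so the trace shows an exact literal match rather than a glob pattern. A two-line reproduction:

    # Under `bash -x` the test below traces as: + [[ software == \s\o\f\t\w\a\r\e ]]
    accel_module=software
    [[ $accel_module == "software" ]] && echo "module check passed"

In this job the check always passes, since no hardware accel module is configured and every case falls back to the software engine.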
00:06:36.983 [2024-06-08 00:31:54.937480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203546 ] 00:06:36.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.983 [2024-06-08 00:31:55.000601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.984 [2024-06-08 00:31:55.073258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.984 00:31:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:37.926 00:31:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.926 00:06:37.926 real 0m1.293s 00:06:37.926 user 0m1.203s 00:06:37.926 sys 0m0.101s 00:06:37.926 00:31:56 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.926 00:31:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:37.926 ************************************ 00:06:37.926 END TEST accel_compare 00:06:37.926 ************************************ 00:06:38.190 00:31:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:38.190 00:31:56 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:38.190 00:31:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.190 00:31:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.190 ************************************ 00:06:38.190 START TEST accel_xor 00:06:38.190 ************************************ 00:06:38.190 00:31:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.190 [2024-06-08 00:31:56.307580] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
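With accel_compare done, the xor suite starts on the same accel_perf harness. In the config loop that follows, the bare 2 read right after val=xor is, on a plausible reading of the trace, the number of xor source buffers (accel_perf's default when no -x flag is passed). A hedged sketch of the invocation, under the same SPDK_DIR assumption as before:

    # Sketch: first xor pass, default two source buffers (no -x flag).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y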
00:06:38.190 [2024-06-08 00:31:56.307666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203735 ] 00:06:38.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.190 [2024-06-08 00:31:56.369091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.190 [2024-06-08 00:31:56.434654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.190 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.485 00:31:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 
00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.427 00:06:39.427 real 0m1.286s 00:06:39.427 user 0m1.195s 00:06:39.427 sys 0m0.101s 00:06:39.427 00:31:57 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.427 00:31:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:39.427 ************************************ 00:06:39.427 END TEST accel_xor 00:06:39.427 ************************************ 00:06:39.427 00:31:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:39.427 00:31:57 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:39.427 00:31:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.427 00:31:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.427 ************************************ 00:06:39.427 START TEST accel_xor 00:06:39.427 ************************************ 00:06:39.427 00:31:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:39.427 00:31:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:39.427 [2024-06-08 00:31:57.669029] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
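This second xor pass re-runs the same workload with -x 3 appended, and the config loop that follows duly reads val=3 where the earlier pass read 2: the source-buffer count raised from the default two to three. Sketch of the underlying invocation (SPDK_DIR assumed as before):

    # Sketch: xor with three source buffers instead of the default two.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3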
00:06:39.427 [2024-06-08 00:31:57.669121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204042 ] 00:06:39.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.688 [2024-06-08 00:31:57.730035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.688 [2024-06-08 00:31:57.796676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.688 00:31:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 
00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:41.073 00:31:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.073 00:06:41.073 real 0m1.287s 00:06:41.073 user 0m1.200s 00:06:41.073 sys 0m0.098s 00:06:41.073 00:31:58 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.073 00:31:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:41.073 ************************************ 00:06:41.074 END TEST accel_xor 00:06:41.074 ************************************ 00:06:41.074 00:31:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:41.074 00:31:58 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:41.074 00:31:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.074 00:31:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.074 ************************************ 00:06:41.074 START TEST accel_dif_verify 00:06:41.074 ************************************ 00:06:41.074 00:31:58 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:41.074 [2024-06-08 00:31:59.029676] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
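The dif_verify config loop that follows reads two 4096-byte buffer sizes, a 512-byte value, and an 8-byte value. A plausible (unconfirmed) reading is the classic DIF layout: 4096-byte transfers split into 512-byte blocks, each block protected by an 8-byte integrity tuple. Quick arithmetic under that assumption:

    # Hedged interpretation of the dif_verify sizes read in the trace.
    xfer=4096; block=512; dif=8
    blocks=$((xfer / block))    # 8 blocks per transfer
    pi=$((blocks * dif))        # 64 bytes of protection info per transfer
    echo "$blocks blocks, $pi DIF bytes per $xfer-byte transfer"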
00:06:41.074 [2024-06-08 00:31:59.029736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204395 ] 00:06:41.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.074 [2024-06-08 00:31:59.089873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.074 [2024-06-08 00:31:59.154816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 
00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.074 00:31:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 
00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:42.016 00:32:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.016 00:06:42.016 real 0m1.285s 00:06:42.016 user 0m1.205s 00:06:42.016 sys 0m0.092s 00:06:42.016 00:32:00 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.016 00:32:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.016 ************************************ 00:06:42.016 END TEST accel_dif_verify 00:06:42.016 ************************************ 00:06:42.278 00:32:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:42.278 00:32:00 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:42.278 00:32:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.278 00:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.278 ************************************ 00:06:42.278 START TEST accel_dif_generate 00:06:42.278 ************************************ 00:06:42.278 00:32:00 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 
00:32:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:42.278 [2024-06-08 00:32:00.388094] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:42.278 [2024-06-08 00:32:00.388159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204744 ] 00:06:42.278 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.278 [2024-06-08 00:32:00.448456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.278 [2024-06-08 00:32:00.514868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
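Every suite in this section is launched through the same run_test wrapper (e.g. run_test accel_dif_generate accel_test -t 1 -w dif_generate above). A minimal sketch of the shape those START/END banners imply; this is an assumed reconstruction for orientation, not the real helper from autotest_common.sh:

    # Assumed shape of the run_test wrapper producing the banners above.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      "$@"                        # e.g. accel_test -t 1 -w dif_generate
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }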
00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.278 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.538 00:32:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:43.480 00:32:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.480 00:06:43.480 real 0m1.283s 00:06:43.480 user 0m1.196s 00:06:43.480 sys 
0m0.100s 00:06:43.480 00:32:01 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.480 00:32:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:43.480 ************************************ 00:06:43.480 END TEST accel_dif_generate 00:06:43.480 ************************************ 00:06:43.480 00:32:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:43.480 00:32:01 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:43.480 00:32:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.480 00:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.480 ************************************ 00:06:43.480 START TEST accel_dif_generate_copy 00:06:43.480 ************************************ 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:43.480 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:43.480 [2024-06-08 00:32:01.749806] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
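Each END TEST block in this section is preceded by a time(1)-style summary (real, user, sys, e.g. real 0m1.283s just above). When triaging a run, the wall-clock figures can be pulled straight out of a saved console log; LOGFILE below is an assumption, not a path from this run:

    # Sketch: list per-test wall-clock times from a captured console log.
    LOGFILE=console.log
    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' "$LOGFILE" | awk '{print $2}'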
00:06:43.480 [2024-06-08 00:32:01.749895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205027 ] 00:06:43.853 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.853 [2024-06-08 00:32:01.813381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.853 [2024-06-08 00:32:01.883722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.853 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.854 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.792 00:06:44.792 real 0m1.293s 00:06:44.792 user 0m1.192s 00:06:44.792 sys 0m0.113s 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.792 00:32:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.792 ************************************ 00:06:44.792 END TEST accel_dif_generate_copy 00:06:44.792 ************************************ 00:06:44.792 00:32:03 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:44.792 00:32:03 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.792 00:32:03 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:44.792 00:32:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.792 00:32:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.052 ************************************ 00:06:45.052 START TEST accel_comp 00:06:45.052 ************************************ 00:06:45.052 00:32:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:45.052 [2024-06-08 00:32:03.120569] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:45.052 [2024-06-08 00:32:03.120673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205223 ] 00:06:45.052 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.052 [2024-06-08 00:32:03.187483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.052 [2024-06-08 00:32:03.259174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 
00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.052 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.053 00:32:03 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.053 00:32:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:46.433 00:32:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.433 00:06:46.433 real 0m1.300s 00:06:46.433 user 0m1.198s 00:06:46.433 sys 0m0.114s 00:06:46.433 00:32:04 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.433 00:32:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:46.433 ************************************ 00:06:46.433 END TEST accel_comp 00:06:46.433 ************************************ 00:06:46.433 00:32:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.433 00:32:04 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:46.433 00:32:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.433 00:32:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.433 ************************************ 00:06:46.433 START TEST accel_decomp 00:06:46.433 ************************************ 00:06:46.433 00:32:04 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:46.434 [2024-06-08 00:32:04.494185] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:46.434 [2024-06-08 00:32:04.494252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205484 ] 00:06:46.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.434 [2024-06-08 00:32:04.554931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.434 [2024-06-08 00:32:04.621573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.434 00:32:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.815 00:32:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.815 00:06:47.815 real 0m1.287s 00:06:47.815 user 0m1.206s 00:06:47.815 sys 0m0.094s 00:06:47.815 00:32:05 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.815 00:32:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:47.815 ************************************ 00:06:47.815 END TEST accel_decomp 00:06:47.815 ************************************ 00:06:47.815 
00:32:05 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.816 00:32:05 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:47.816 00:32:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.816 00:32:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.816 ************************************ 00:06:47.816 START TEST accel_decomp_full 00:06:47.816 ************************************ 00:06:47.816 00:32:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:47.816 00:32:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:47.816 [2024-06-08 00:32:05.859312] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:47.816 [2024-06-08 00:32:05.859385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205833 ] 00:06:47.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.816 [2024-06-08 00:32:05.919945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.816 [2024-06-08 00:32:05.986050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.816 00:32:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.200 00:32:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.200 00:06:49.200 real 0m1.295s 00:06:49.200 user 0m1.212s 00:06:49.200 sys 0m0.096s 00:06:49.200 00:32:07 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.200 00:32:07 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 ************************************ 00:06:49.200 END TEST accel_decomp_full 00:06:49.200 ************************************ 00:06:49.200 00:32:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:49.200 00:32:07 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:49.200 00:32:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.200 00:32:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.200 ************************************ 00:06:49.200 START TEST accel_decomp_mcore 00:06:49.200 ************************************ 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:49.200 [2024-06-08 00:32:07.230398] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:49.200 [2024-06-08 00:32:07.230498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206189 ] 00:06:49.200 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.200 [2024-06-08 00:32:07.291930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.200 [2024-06-08 00:32:07.359210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.200 [2024-06-08 00:32:07.359326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.200 [2024-06-08 00:32:07.359481] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.200 [2024-06-08 00:32:07.359481] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.200 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.201 00:32:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.583 00:06:50.583 real 0m1.297s 00:06:50.583 user 0m4.442s 00:06:50.583 sys 0m0.101s 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.583 00:32:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 ************************************ 00:06:50.583 END TEST accel_decomp_mcore 00:06:50.583 ************************************ 00:06:50.583 00:32:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.583 00:32:08 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:50.583 00:32:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.583 00:32:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.583 ************************************ 00:06:50.583 START TEST accel_decomp_full_mcore 00:06:50.583 ************************************ 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:50.583 [2024-06-08 00:32:08.604259] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:50.583 [2024-06-08 00:32:08.604334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206484 ] 00:06:50.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.583 [2024-06-08 00:32:08.669638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.583 [2024-06-08 00:32:08.742352] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.583 [2024-06-08 00:32:08.742486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.583 [2024-06-08 00:32:08.742544] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.583 [2024-06-08 00:32:08.742545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.583 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.584 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.967 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.968 00:06:51.968 real 0m1.320s 00:06:51.968 user 0m4.495s 00:06:51.968 sys 0m0.107s 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.968 00:32:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:51.968 ************************************ 00:06:51.968 END TEST accel_decomp_full_mcore 00:06:51.968 ************************************ 00:06:51.968 00:32:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.968 00:32:09 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:51.968 00:32:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.968 00:32:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.968 ************************************ 00:06:51.968 START TEST accel_decomp_mthread 00:06:51.968 ************************************ 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:51.968 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
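The run_test blocks in this stretch all drive the same accel_perf example binary with the flags visible in the trace: -w decompress selects the workload, -l .../test/accel/bib supplies the pre-compressed input, -y asks for result verification, -t 1 runs for one second, -m 0xf spreads the mcore variant across four reactor cores, and -T 2 gives the mthread variants two worker threads per core. A minimal sketch of reproducing two of these runs by hand, assuming a built SPDK tree at the workspace path shown in the log (running without the -c /dev/fd/62 config pipe the harness uses is an assumption):

# Multi-core software decompress, whole-file operations (-o 0), one second, verified (-y).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf
# Single core, two worker threads (-T 2), default-size operations.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -T 2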
00:06:51.968 [2024-06-08 00:32:10.002187] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:51.968 [2024-06-08 00:32:10.002273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206679 ] 00:06:51.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.968 [2024-06-08 00:32:10.078928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.968 [2024-06-08 00:32:10.152611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.968 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.969 00:32:10 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.352 00:06:53.352 real 0m1.319s 00:06:53.352 user 0m1.205s 00:06:53.352 sys 0m0.125s 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.352 00:32:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:53.352 ************************************ 00:06:53.352 END TEST accel_decomp_mthread 00:06:53.352 ************************************ 00:06:53.352 00:32:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.352 00:32:11 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:53.352 00:32:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.352 00:32:11 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.352 ************************************ 00:06:53.352 START TEST accel_decomp_full_mthread 00:06:53.352 ************************************ 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:53.352 [2024-06-08 00:32:11.396230] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
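Note how the operation size differs between variants: the plain mthread run above latched val='4096 bytes', while both full variants latch val='111250 bytes'. The trace implies that -o 0 makes the harness size each operation to the whole bib fixture instead of the 4 KiB default. A sketch of that selection logic, assumed from the trace rather than copied from accel.sh:

# opt_o stands in for the -o argument; 0 means whole-file operations in the full_* tests.
opt_o=0
bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
block_size=4096                      # default, as the plain mthread run reported
if [[ $opt_o -eq 0 ]]; then
    block_size=$(wc -c < "$bib")     # whole file; this run reported '111250 bytes'
fi
echo "operation size: $block_size bytes"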
00:06:53.352 [2024-06-08 00:32:11.396317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206933 ] 00:06:53.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.352 [2024-06-08 00:32:11.458254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.352 [2024-06-08 00:32:11.525593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.352 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
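Every IFS=: / read -r var val / case "$var" run in these traces is one pass of the loop accel.sh uses to consume key/value lines and latch the fields it asserts on afterwards (accel_module, accel_opc); the [[ software == \s\o\f\t\w\a\r\e ]] lines that follow each run are simply bash xtrace escaping a quoted right-hand side so it reads as a literal match rather than a glob. A stripped-down sketch of the loop's shape; the sample input below is illustrative, not taken from this run:

accel_module="" accel_opc=""
while IFS=: read -r var val; do
    case "$var" in
        *accel_module*) accel_module=${val// /} ;;   # latch the module name
        *accel_opc*)    accel_opc=${val// /} ;;      # latch the opcode under test
    esac
done <<'EOF'
accel_module: software
accel_opc: decompress
EOF
# The assertions mirrored from the trace: both fields latched, module is software.
[[ -n $accel_module && -n $accel_opc && $accel_module == "software" ]] && echo OK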
00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.353 00:32:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.738 00:06:54.738 real 0m1.323s 00:06:54.738 user 0m1.235s 00:06:54.738 sys 0m0.100s 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.738 00:32:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:54.738 ************************************ 00:06:54.738 END TEST accel_decomp_full_mthread 00:06:54.738 
************************************ 00:06:54.738 00:32:12 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:54.738 00:32:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:54.738 00:32:12 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:54.738 00:32:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:54.738 00:32:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.738 00:32:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.738 00:32:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.738 00:32:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.738 00:32:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.738 00:32:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.738 00:32:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.738 00:32:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:54.738 00:32:12 accel -- accel/accel.sh@41 -- # jq -r . 00:06:54.738 ************************************ 00:06:54.738 START TEST accel_dif_functional_tests 00:06:54.738 ************************************ 00:06:54.738 00:32:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:54.738 [2024-06-08 00:32:12.823098] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:54.738 [2024-06-08 00:32:12.823148] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207285 ] 00:06:54.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.738 [2024-06-08 00:32:12.883732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.738 [2024-06-08 00:32:12.956810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.738 [2024-06-08 00:32:12.956931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.738 [2024-06-08 00:32:12.956934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.738 00:06:54.738 00:06:54.738 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.738 http://cunit.sourceforge.net/ 00:06:54.738 00:06:54.738 00:06:54.738 Suite: accel_dif 00:06:54.738 Test: verify: DIF generated, GUARD check ...passed 00:06:54.738 Test: verify: DIF generated, APPTAG check ...passed 00:06:54.738 Test: verify: DIF generated, REFTAG check ...passed 00:06:54.738 Test: verify: DIF not generated, GUARD check ...[2024-06-08 00:32:13.012397] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:54.738 passed 00:06:54.738 Test: verify: DIF not generated, APPTAG check ...[2024-06-08 00:32:13.012447] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:54.738 passed 00:06:54.739 Test: verify: DIF not generated, REFTAG check ...[2024-06-08 00:32:13.012469] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:54.739 passed 00:06:54.739 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:54.739 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-08 00:32:13.012518] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:54.739 passed 00:06:54.739 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:54.739 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:54.739 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:54.739 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-08 00:32:13.012630] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:54.739 passed 00:06:54.739 Test: verify copy: DIF generated, GUARD check ...passed 00:06:54.739 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:54.739 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:54.739 Test: verify copy: DIF not generated, GUARD check ...[2024-06-08 00:32:13.012750] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:54.739 passed 00:06:54.739 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-08 00:32:13.012774] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:54.739 passed 00:06:54.739 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-08 00:32:13.012794] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:54.739 passed 00:06:54.739 Test: generate copy: DIF generated, GUARD check ...passed 00:06:54.739 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:54.739 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:54.739 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:54.739 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:54.739 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:54.739 Test: generate copy: iovecs-len validate ...[2024-06-08 00:32:13.012978] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:54.739 passed 00:06:54.739 Test: generate copy: buffer alignment validate ...passed 00:06:54.739 00:06:54.739 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.739 suites 1 1 n/a 0 0 00:06:54.739 tests 26 26 26 0 0 00:06:54.739 asserts 115 115 115 0 n/a 00:06:54.739 00:06:54.739 Elapsed time = 0.002 seconds 00:06:55.001 00:06:55.001 real 0m0.361s 00:06:55.001 user 0m0.493s 00:06:55.001 sys 0m0.133s 00:06:55.001 00:32:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.001 00:32:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:55.001 ************************************ 00:06:55.001 END TEST accel_dif_functional_tests 00:06:55.001 ************************************ 00:06:55.001 00:06:55.001 real 0m30.118s 00:06:55.001 user 0m33.766s 00:06:55.001 sys 0m4.098s 00:06:55.001 00:32:13 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.001 00:32:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.001 ************************************ 00:06:55.001 END TEST accel 00:06:55.001 ************************************ 00:06:55.001 00:32:13 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:55.001 00:32:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:55.001 00:32:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:55.001 00:32:13 -- common/autotest_common.sh@10 -- # set +x 00:06:55.001 ************************************ 00:06:55.001 START TEST accel_rpc 00:06:55.001 ************************************ 00:06:55.001 00:32:13 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:55.265 * Looking for test storage... 00:06:55.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:55.265 00:32:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.265 00:32:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=207460 00:06:55.265 00:32:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 207460 00:06:55.265 00:32:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:55.265 00:32:13 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 207460 ']' 00:06:55.265 00:32:13 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.266 00:32:13 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:55.266 00:32:13 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.266 00:32:13 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:55.266 00:32:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.266 [2024-06-08 00:32:13.404449] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
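The accel_rpc suite starting here launches spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be issued before the accel framework initializes; the trace below first maps copy to a deliberately bogus module, then to software, then starts the framework and reads the mapping back. A hedged sketch of the same sequence by hand (the sleep is a crude stand-in for the waitforlisten helper on /var/tmp/spdk.sock that the log uses):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
tgt_pid=$!
sleep 2                                          # real harness: waitforlisten on /var/tmp/spdk.sock
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" accel_assign_opc -o copy -m incorrect     # accepted pre-init, logged only as a NOTICE
"$rpc" accel_assign_opc -o copy -m software      # the later assignment wins
"$rpc" framework_start_init
"$rpc" accel_get_opc_assignments | jq -r .copy   # expected output: software
kill "$tgt_pid"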
00:06:55.266 [2024-06-08 00:32:13.404523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207460 ] 00:06:55.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.266 [2024-06-08 00:32:13.468260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.266 [2024-06-08 00:32:13.544256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 ************************************ 00:06:56.208 START TEST accel_assign_opcode 00:06:56.208 ************************************ 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 [2024-06-08 00:32:14.210224] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 [2024-06-08 00:32:14.218237] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:56.208 00:32:14 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.208 software 00:06:56.208 00:06:56.208 real 0m0.204s 00:06:56.208 user 0m0.051s 00:06:56.208 sys 0m0.010s 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.208 00:32:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.208 ************************************ 00:06:56.208 END TEST accel_assign_opcode 00:06:56.208 ************************************ 00:06:56.208 00:32:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 207460 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 207460 ']' 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 207460 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:56.208 00:32:14 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 207460 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 207460' 00:06:56.469 killing process with pid 207460 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@968 -- # kill 207460 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@973 -- # wait 207460 00:06:56.469 00:06:56.469 real 0m1.464s 00:06:56.469 user 0m1.549s 00:06:56.469 sys 0m0.411s 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.469 00:32:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.469 ************************************ 00:06:56.469 END TEST accel_rpc 00:06:56.469 ************************************ 00:06:56.469 00:32:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.469 00:32:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:56.469 00:32:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.469 00:32:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.731 ************************************ 00:06:56.731 START TEST app_cmdline 00:06:56.731 ************************************ 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.731 * Looking for test storage... 
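The app_cmdline test beginning here restricts the target to an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods) and then probes it from both sides: the permitted spdk_get_version returns the version JSON shown below, while the disallowed env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601, Method not found. A short sketch of the two probes, under the same assumptions as above:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 2                                    # crude stand-in for waitforlisten
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" spdk_get_version | jq -r .version   # prints: SPDK v24.09-pre git sha1 e55c9a812
"$rpc" env_dpdk_get_mem_stats              # expected failure: -32601 Method not found
kill "$tgt_pid"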
00:06:56.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.731 00:32:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:56.731 00:32:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=207785 00:06:56.731 00:32:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 207785 00:06:56.731 00:32:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 207785 ']' 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:56.731 00:32:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.731 [2024-06-08 00:32:14.933790] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:56.731 [2024-06-08 00:32:14.933854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207785 ] 00:06:56.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.731 [2024-06-08 00:32:14.996579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.992 [2024-06-08 00:32:15.071088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.563 00:32:15 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:57.563 00:32:15 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:57.563 00:32:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:57.563 { 00:06:57.563 "version": "SPDK v24.09-pre git sha1 e55c9a812", 00:06:57.563 "fields": { 00:06:57.563 "major": 24, 00:06:57.563 "minor": 9, 00:06:57.563 "patch": 0, 00:06:57.563 "suffix": "-pre", 00:06:57.563 "commit": "e55c9a812" 00:06:57.563 } 00:06:57.563 } 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.824 00:32:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:57.824 00:32:15 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.824 request: 00:06:57.824 { 00:06:57.824 "method": "env_dpdk_get_mem_stats", 00:06:57.824 "req_id": 1 00:06:57.824 } 00:06:57.824 Got JSON-RPC error response 00:06:57.824 response: 00:06:57.824 { 00:06:57.824 "code": -32601, 00:06:57.824 "message": "Method not found" 00:06:57.824 } 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:57.824 00:32:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 207785 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 207785 ']' 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 207785 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:57.824 00:32:16 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 207785 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 207785' 00:06:58.086 killing process with pid 207785 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@968 -- # kill 207785 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@973 -- # wait 207785 00:06:58.086 00:06:58.086 real 0m1.547s 00:06:58.086 user 0m1.866s 00:06:58.086 sys 0m0.389s 00:06:58.086 00:32:16 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.086 00:32:16 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.086 ************************************ 00:06:58.086 END TEST app_cmdline 00:06:58.086 ************************************ 00:06:58.347 00:32:16 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:58.347 00:32:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:58.347 00:32:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:58.347 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.347 ************************************ 00:06:58.347 START TEST version 00:06:58.347 ************************************ 00:06:58.347 00:32:16 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:58.347 * Looking for test storage... 00:06:58.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.347 00:32:16 version -- app/version.sh@17 -- # get_header_version major 00:06:58.348 00:32:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # cut -f2 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.348 00:32:16 version -- app/version.sh@17 -- # major=24 00:06:58.348 00:32:16 version -- app/version.sh@18 -- # get_header_version minor 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # cut -f2 00:06:58.348 00:32:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.348 00:32:16 version -- app/version.sh@18 -- # minor=9 00:06:58.348 00:32:16 version -- app/version.sh@19 -- # get_header_version patch 00:06:58.348 00:32:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # cut -f2 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.348 00:32:16 version -- app/version.sh@19 -- # patch=0 00:06:58.348 00:32:16 version -- app/version.sh@20 -- # get_header_version suffix 00:06:58.348 00:32:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # cut -f2 00:06:58.348 00:32:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.348 00:32:16 version -- app/version.sh@20 -- # suffix=-pre 00:06:58.348 00:32:16 version -- app/version.sh@22 -- # version=24.9 00:06:58.348 00:32:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:58.348 00:32:16 version -- app/version.sh@28 -- # version=24.9rc0 00:06:58.348 00:32:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.348 00:32:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:58.348 00:32:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:58.348 00:32:16 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:58.348 00:06:58.348 real 0m0.178s 00:06:58.348 user 0m0.077s 00:06:58.348 sys 0m0.140s 00:06:58.348 00:32:16 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.348 00:32:16 version -- common/autotest_common.sh@10 -- # set +x 00:06:58.348 ************************************ 00:06:58.348 END TEST version 00:06:58.348 ************************************ 00:06:58.348 00:32:16 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:58.348 00:32:16 -- spdk/autotest.sh@198 -- # uname -s 00:06:58.610 00:32:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:58.610 00:32:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:58.610 00:32:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:58.610 00:32:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:58.610 00:32:16 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:58.610 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 00:32:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:58.610 00:32:16 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:58.610 00:32:16 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:58.610 00:32:16 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:58.610 00:32:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:58.610 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.610 ************************************ 00:06:58.610 START TEST nvmf_tcp 00:06:58.610 ************************************ 00:06:58.610 00:32:16 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:58.610 * Looking for test storage... 00:06:58.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.610 00:32:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.611 00:32:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.611 00:32:16 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.611 00:32:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.611 00:32:16 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.611 00:32:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.611 00:32:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.611 00:32:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:58.611 00:32:16 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:58.611 00:32:16 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:58.611 00:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:58.611 00:32:16 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:58.611 00:32:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:58.611 00:32:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:58.611 00:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.873 ************************************ 00:06:58.873 START TEST nvmf_example 00:06:58.873 ************************************ 00:06:58.873 00:32:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:58.873 * Looking for test storage... 
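[Editor's note on why the PATH strings in this trace keep ballooning: every source of /etc/opt/spdk-pkgdep/paths/export.sh prepends the go, golangci and protoc directories again, once per nested sourcing of common.sh, so each test adds another copy. Lookup stops at the first match, so it is harmless, only noisy. A guarded prepend along these lines would keep it idempotent; prepend_path is my name for the sketch, not a helper from the tree:]

prepend_path() {
    # Prepend $1 to PATH only if it is not already a component.
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, keep PATH unchanged
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin   # second call is a no-op
export PATH
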
00:06:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.873 00:32:17 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.874 00:32:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:05.462 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.462 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:05.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:05.463 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:05.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.463 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:05.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:05.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms
00:07:05.724
00:07:05.724 --- 10.0.0.2 ping statistics ---
00:07:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.724 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:05.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:05.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms
00:07:05.724
00:07:05.724 --- 10.0.0.1 ping statistics ---
00:07:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.724 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:05.724 00:32:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=212100
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 212100
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 212100 ']'
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
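[Editor's note: this is the heart of the phy-mode network setup just traced. The two ice ports enumerated earlier, cvl_0_0 and cvl_0_1, are split across network namespaces so one physical host can act as both NVMe/TCP target and initiator: the target port moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator port stays in the root namespace as 10.0.0.1/24, port 4420 is opened in iptables, and the two pings prove both directions work before any NVMe traffic starts. The same bring-up, condensed from the trace (root required; the interface names are specific to this machine):]

ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
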
00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:05.986 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.929 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:06.930 00:32:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:06.930 EAL: No free 2048 kB hugepages reported on node 1 
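[Editor's note: rpc_cmd in the trace above is autotest's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Issued directly, the same five calls stand up everything the perf run below connects to: a TCP transport, a RAM-backed bdev, a subsystem exposing it, and a listener on the namespaced address. A sketch with the flags taken verbatim from the trace (per rpc.py, -u sets the I/O unit size, -o toggles the TCP C2H success optimization, and -a allows any host NQN to connect):]

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512           # 64 MiB malloc bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Drive I/O from the initiator side exactly as the log does:
# 64-deep queue, 4 KiB I/Os, random read/write with 30% reads, for 10 seconds.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
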
00:07:19.165 Initializing NVMe Controllers 00:07:19.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.165 Initialization complete. Launching workers. 00:07:19.165 ======================================================== 00:07:19.165 Latency(us) 00:07:19.165 Device Information : IOPS MiB/s Average min max 00:07:19.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17473.56 68.26 3662.32 777.43 20112.39 00:07:19.165 ======================================================== 00:07:19.165 Total : 17473.56 68.26 3662.32 777.43 20112.39 00:07:19.165 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.165 rmmod nvme_tcp 00:07:19.165 rmmod nvme_fabrics 00:07:19.165 rmmod nvme_keyring 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 212100 ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 212100 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 212100 ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 212100 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 212100 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 212100' 00:07:19.165 killing process with pid 212100 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 212100 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 212100 00:07:19.165 nvmf threads initialize successfully 00:07:19.165 bdev subsystem init successfully 00:07:19.165 created a nvmf target service 00:07:19.165 create targets's poll groups done 00:07:19.165 all subsystems of target started 00:07:19.165 nvmf target is running 00:07:19.165 all subsystems of target stopped 00:07:19.165 destroy targets's poll groups done 00:07:19.165 destroyed the nvmf target service 00:07:19.165 bdev subsystem finish successfully 00:07:19.165 nvmf threads destroy successfully 00:07:19.165 00:32:35 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.165 00:32:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.426 00:07:19.426 real 0m20.696s 00:07:19.426 user 0m46.664s 00:07:19.426 sys 0m6.101s 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.426 00:32:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.426 ************************************ 00:07:19.426 END TEST nvmf_example 00:07:19.426 ************************************ 00:07:19.426 00:32:37 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.427 00:32:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:19.427 00:32:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.427 00:32:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.427 ************************************ 00:07:19.427 START TEST nvmf_filesystem 00:07:19.427 ************************************ 00:07:19.427 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.691 * Looking for test storage... 
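[Editor's note on the nvmf_example teardown traced just above: it runs in a deliberate order. First quiesce with sync, then tolerate failures while the initiator-side kernel modules unload (modprobe -v -r nvme-tcp drags out nvme_tcp, nvme_fabrics and nvme_keyring, retried in a loop), then verify, kill and wait on the target process so the namespace and test addresses can be released. A condensed sketch; remove_spdk_ns runs with xtrace off, so the namespace deletion line is an assumption about what it does rather than a command from the trace:]

sync                                   # flush dirty data before pulling modules
set +e                                 # unloading can fail while references drain
for i in {1..20}; do modprobe -v -r nvme-tcp && break; sleep 1; done
modprobe -v -r nvme-fabrics
set -e
kill -0 "$nvmfpid" && kill "$nvmfpid"  # only signal a live target, then reap it
wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk        # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1               # drop the initiator-side test address
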
00:07:19.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:19.691 00:32:37 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:19.691 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
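[Editor's note: the CONFIG_* wall being traced here is test/common/build_config.sh, a shell mirror of how the tree was configured. Each CONFIG_X=y/n pairs with a #define SPDK_CONFIG_X 1 or #undef SPDK_CONFIG_X in include/spdk/config.h, which applications.sh dumps a little further down, so shell tests can gate on build features (this job, for instance, built with CONFIG_UBSAN=y). A sketch of gating both ways; paths are checkout-relative:]

source ./test/common/build_config.sh
if [[ $CONFIG_UBSAN != y ]]; then
    echo "UBSan not compiled in, skipping" >&2
    exit 0
fi
# Equivalent check against the generated header, in the spirit of applications.sh:
grep -q '^#define SPDK_CONFIG_UBSAN 1' ./include/spdk/config.h && echo "UBSan build confirmed"
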
00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:19.692 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:19.692 #define SPDK_CONFIG_H 00:07:19.692 #define SPDK_CONFIG_APPS 1 00:07:19.692 #define SPDK_CONFIG_ARCH native 00:07:19.692 #undef SPDK_CONFIG_ASAN 00:07:19.692 #undef SPDK_CONFIG_AVAHI 00:07:19.692 #undef SPDK_CONFIG_CET 00:07:19.692 #define SPDK_CONFIG_COVERAGE 1 00:07:19.692 #define SPDK_CONFIG_CROSS_PREFIX 00:07:19.692 #undef SPDK_CONFIG_CRYPTO 00:07:19.692 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:19.692 #undef SPDK_CONFIG_CUSTOMOCF 00:07:19.692 #undef SPDK_CONFIG_DAOS 00:07:19.692 #define SPDK_CONFIG_DAOS_DIR 00:07:19.692 #define SPDK_CONFIG_DEBUG 1 00:07:19.692 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:19.692 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:19.692 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:19.692 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:19.692 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:19.692 #undef SPDK_CONFIG_DPDK_UADK 00:07:19.692 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:19.692 #define SPDK_CONFIG_EXAMPLES 1 00:07:19.692 #undef SPDK_CONFIG_FC 00:07:19.692 #define SPDK_CONFIG_FC_PATH 00:07:19.692 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:19.692 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:19.692 #undef SPDK_CONFIG_FUSE 00:07:19.692 #undef SPDK_CONFIG_FUZZER 00:07:19.692 #define SPDK_CONFIG_FUZZER_LIB 00:07:19.692 #undef SPDK_CONFIG_GOLANG 00:07:19.692 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:19.692 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:19.692 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:19.692 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:19.692 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:19.692 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:19.692 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:19.692 #define SPDK_CONFIG_IDXD 1 00:07:19.692 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:19.692 #undef SPDK_CONFIG_IPSEC_MB 00:07:19.692 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:19.692 #define SPDK_CONFIG_ISAL 1 00:07:19.692 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:19.692 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:19.692 #define SPDK_CONFIG_LIBDIR 00:07:19.692 #undef SPDK_CONFIG_LTO 00:07:19.692 #define SPDK_CONFIG_MAX_LCORES 00:07:19.692 #define SPDK_CONFIG_NVME_CUSE 1 00:07:19.692 #undef SPDK_CONFIG_OCF 00:07:19.692 #define SPDK_CONFIG_OCF_PATH 00:07:19.692 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:19.692 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:19.692 #define SPDK_CONFIG_PGO_DIR 00:07:19.692 #undef SPDK_CONFIG_PGO_USE 00:07:19.692 #define SPDK_CONFIG_PREFIX /usr/local 00:07:19.692 #undef SPDK_CONFIG_RAID5F 00:07:19.692 #undef SPDK_CONFIG_RBD 00:07:19.692 #define SPDK_CONFIG_RDMA 1 00:07:19.692 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:19.692 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:19.693 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:19.693 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:19.693 #define SPDK_CONFIG_SHARED 1 00:07:19.693 #undef SPDK_CONFIG_SMA 00:07:19.693 #define SPDK_CONFIG_TESTS 1 00:07:19.693 #undef SPDK_CONFIG_TSAN 00:07:19.693 #define SPDK_CONFIG_UBLK 1 00:07:19.693 #define SPDK_CONFIG_UBSAN 1 00:07:19.693 #undef SPDK_CONFIG_UNIT_TESTS 00:07:19.693 #undef SPDK_CONFIG_URING 00:07:19.693 #define SPDK_CONFIG_URING_PATH 00:07:19.693 #undef SPDK_CONFIG_URING_ZNS 00:07:19.693 #undef SPDK_CONFIG_USDT 00:07:19.693 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:19.693 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:19.693 #undef SPDK_CONFIG_VFIO_USER 00:07:19.693 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:19.693 #define SPDK_CONFIG_VHOST 1 00:07:19.693 #define SPDK_CONFIG_VIRTIO 1 00:07:19.693 #undef SPDK_CONFIG_VTUNE 00:07:19.693 #define SPDK_CONFIG_VTUNE_DIR 00:07:19.693 #define SPDK_CONFIG_WERROR 1 00:07:19.693 #define SPDK_CONFIG_WPDK_DIR 00:07:19.693 #undef SPDK_CONFIG_XNVME 00:07:19.693 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:19.693 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:19.694 00:32:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.694 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
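
[annotation] The trace at this point shows autotest_common.sh preparing a LeakSanitizer suppression list before the test body runs: it clears any stale /var/tmp/asan_suppression_file, writes the entry leak:libfuse3.so into it, and points LSAN_OPTIONS at the result. A minimal standalone sketch of that pattern follows; the path and the suppressed library name are taken from the trace above, while the append redirection is implied by the surrounding commands rather than shown verbatim in the xtrace output:

    # rebuild the suppression file from scratch on every run, as the harness does
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # suppress leak reports whose stacks originate in libfuse3.so
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    # LeakSanitizer reads this list when the instrumented process shuts down
    export LSAN_OPTIONS=suppressions="$asan_suppression_file"
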
00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 214942 ]] 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 214942 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.jILTkI 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jILTkI/tests/target /tmp/spdk.jILTkI 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956665856 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327763968 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:19.695 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118753320960 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10617659392 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864499200 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:19.696 00:32:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684466176 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1024000 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:19.696 * Looking for test storage... 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118753320960 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12832251904 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.696 00:32:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.697 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.958 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.958 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.958 00:32:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.958 00:32:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.553 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:26.554 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:26.554 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:26.554 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:26.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.554 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.816 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.816 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.816 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.816 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.816 00:32:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:07:26.816 00:07:26.816 --- 10.0.0.2 ping statistics --- 00:07:26.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.816 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:07:26.816 00:07:26.816 --- 10.0.0.1 ping statistics --- 00:07:26.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.816 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.816 00:32:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.078 ************************************ 00:07:27.078 START TEST nvmf_filesystem_no_in_capsule 00:07:27.078 ************************************ 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=218595 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 218595 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 218595 ']' 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:27.078 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.078 [2024-06-08 00:32:45.184066] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:27.078 [2024-06-08 00:32:45.184122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.078 [2024-06-08 00:32:45.254051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.078 [2024-06-08 00:32:45.331048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.078 [2024-06-08 00:32:45.331085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.078 [2024-06-08 00:32:45.331092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.078 [2024-06-08 00:32:45.331099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.078 [2024-06-08 00:32:45.331105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.078 [2024-06-08 00:32:45.331246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.078 [2024-06-08 00:32:45.331383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.078 [2024-06-08 00:32:45.331543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.078 [2024-06-08 00:32:45.331543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.753 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:27.753 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:27.753 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.753 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:27.753 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.753 [2024-06-08 00:32:46.009055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.753 00:32:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.753 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.014 Malloc1 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.014 [2024-06-08 00:32:46.137722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
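At this point the trace has provisioned the whole target side: a TCP transport, a RAM-backed bdev, a subsystem, a namespace, and a listener. Pulled out of the rpc_cmd wrapper, the sequence amounts to roughly the following standalone rpc.py calls (a sketch: the flags are copied from this run, and the default /var/tmp/spdk.sock RPC socket of the nvmf_tgt started above is assumed):

    # Target-side setup as driven by rpc_cmd above; -u is the I/O unit size and
    # -c the allowed in-capsule data size (0 disables it for this first pass).
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1                    # 512 MiB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420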
00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.014 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:28.014 { 00:07:28.014 "name": "Malloc1", 00:07:28.014 "aliases": [ 00:07:28.014 "4e13f072-db1b-4942-a94e-05f02f23da6b" 00:07:28.014 ], 00:07:28.014 "product_name": "Malloc disk", 00:07:28.014 "block_size": 512, 00:07:28.014 "num_blocks": 1048576, 00:07:28.014 "uuid": "4e13f072-db1b-4942-a94e-05f02f23da6b", 00:07:28.014 "assigned_rate_limits": { 00:07:28.014 "rw_ios_per_sec": 0, 00:07:28.015 "rw_mbytes_per_sec": 0, 00:07:28.015 "r_mbytes_per_sec": 0, 00:07:28.015 "w_mbytes_per_sec": 0 00:07:28.015 }, 00:07:28.015 "claimed": true, 00:07:28.015 "claim_type": "exclusive_write", 00:07:28.015 "zoned": false, 00:07:28.015 "supported_io_types": { 00:07:28.015 "read": true, 00:07:28.015 "write": true, 00:07:28.015 "unmap": true, 00:07:28.015 "write_zeroes": true, 00:07:28.015 "flush": true, 00:07:28.015 "reset": true, 00:07:28.015 "compare": false, 00:07:28.015 "compare_and_write": false, 00:07:28.015 "abort": true, 00:07:28.015 "nvme_admin": false, 00:07:28.015 "nvme_io": false 00:07:28.015 }, 00:07:28.015 "memory_domains": [ 00:07:28.015 { 00:07:28.015 "dma_device_id": "system", 00:07:28.015 "dma_device_type": 1 00:07:28.015 }, 00:07:28.015 { 00:07:28.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.015 "dma_device_type": 2 00:07:28.015 } 00:07:28.015 ], 00:07:28.015 "driver_specific": {} 00:07:28.015 } 00:07:28.015 ]' 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.015 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.931 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.931 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:29.931 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.931 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:29.931 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:31.843 00:32:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:31.843 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:31.844 00:32:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:32.157 00:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:32.419 00:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.361 ************************************ 00:07:33.361 START TEST filesystem_ext4 00:07:33.361 ************************************ 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:33.361 00:32:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:33.361 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:33.361 mke2fs 1.46.5 (30-Dec-2021) 00:07:33.622 Discarding device blocks: 0/522240 done 00:07:33.622 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:33.622 Filesystem UUID: b27a853f-aa47-4e8d-b5e2-2f3a3788101f 00:07:33.622 Superblock backups stored on blocks: 00:07:33.622 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:33.622 00:07:33.622 Allocating group tables: 0/64 done 00:07:33.622 Writing inode tables: 0/64 done 00:07:33.622 Creating journal (8192 blocks): done 00:07:33.622 Writing superblocks and filesystem accounting information: 0/64 done 00:07:33.622 00:07:33.622 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:33.622 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 218595 00:07:33.882 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.883 00:32:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.883 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.883 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.143 00:07:34.143 real 0m0.544s 00:07:34.143 user 0m0.034s 00:07:34.143 sys 0m0.064s 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:34.143 ************************************ 00:07:34.143 END TEST filesystem_ext4 00:07:34.143 ************************************ 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.143 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 ************************************ 00:07:34.144 START TEST filesystem_btrfs 00:07:34.144 ************************************ 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:34.144 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:34.716 btrfs-progs v6.6.2 00:07:34.716 See https://btrfs.readthedocs.io for more information. 00:07:34.716 00:07:34.716 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:34.716 NOTE: several default settings have changed in version 5.15, please make sure 00:07:34.716 this does not affect your deployments: 00:07:34.716 - DUP for metadata (-m dup) 00:07:34.716 - enabled no-holes (-O no-holes) 00:07:34.716 - enabled free-space-tree (-R free-space-tree) 00:07:34.716 00:07:34.716 Label: (null) 00:07:34.716 UUID: a4d7dde8-5bf2-4175-8741-c6bf5f616b3a 00:07:34.716 Node size: 16384 00:07:34.716 Sector size: 4096 00:07:34.716 Filesystem size: 510.00MiB 00:07:34.716 Block group profiles: 00:07:34.716 Data: single 8.00MiB 00:07:34.716 Metadata: DUP 32.00MiB 00:07:34.716 System: DUP 8.00MiB 00:07:34.716 SSD detected: yes 00:07:34.716 Zoned device: no 00:07:34.716 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:34.716 Runtime features: free-space-tree 00:07:34.716 Checksum: crc32c 00:07:34.716 Number of devices: 1 00:07:34.716 Devices: 00:07:34.716 ID SIZE PATH 00:07:34.716 1 510.00MiB /dev/nvme0n1p1 00:07:34.716 00:07:34.716 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:34.716 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 218595 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.980 00:07:34.980 real 0m0.869s 00:07:34.980 user 0m0.023s 00:07:34.980 sys 0m0.134s 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:34.980 ************************************ 00:07:34.980 END TEST filesystem_btrfs 00:07:34.980 ************************************ 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:34.980 00:32:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.980 ************************************ 00:07:34.980 START TEST filesystem_xfs 00:07:34.980 ************************************ 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:34.980 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:34.981 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:34.981 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:34.981 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:34.981 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:34.981 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:34.981 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:34.981 = sectsz=512 attr=2, projid32bit=1 00:07:34.981 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:34.981 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:34.981 data = bsize=4096 blocks=130560, imaxpct=25 00:07:34.981 = sunit=0 swidth=0 blks 00:07:34.981 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:34.981 log =internal log bsize=4096 blocks=16384, version=2 00:07:34.981 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:34.981 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:36.363 Discarding blocks...Done. 
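The same verification runs after every mkfs in this file, and the xfs pass below replays it once more. In plain commands (a sketch; device, mountpoint, and pid are the ones from this run):

    # Per-filesystem smoke test from target/filesystem.sh (traced at @23-@43):
    mount /dev/nvme0n1p1 /mnt/device    # mount the partition served over NVMe/TCP
    touch /mnt/device/aaa               # create a file on it
    sync                                # push the write out to the target
    rm /mnt/device/aaa
    sync
    umount /mnt/device                  # a clean unmount means basic I/O worked
    kill -0 218595                      # and the nvmf_tgt process must have survived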
00:07:36.363 00:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:36.363 00:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 218595 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.277 00:07:38.277 real 0m3.017s 00:07:38.277 user 0m0.024s 00:07:38.277 sys 0m0.077s 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:38.277 ************************************ 00:07:38.277 END TEST filesystem_xfs 00:07:38.277 ************************************ 00:07:38.277 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:38.538 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:38.798 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:39.059 
00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 218595 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 218595 ']' 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 218595 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 218595 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 218595' 00:07:39.059 killing process with pid 218595 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 218595 00:07:39.059 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 218595 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:39.320 00:07:39.320 real 0m12.308s 00:07:39.320 user 0m48.469s 00:07:39.320 sys 0m1.211s 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.320 ************************************ 00:07:39.320 END TEST nvmf_filesystem_no_in_capsule 00:07:39.320 ************************************ 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.320 
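The first suite ends with killprocess, whose trace appears just above. Stripped of the platform special-cases (the uname and sudo checks), its effect is roughly the following (a sketch; pid from this run):

    # killprocess, simplified: confirm the pid is alive and is the reactor
    # process itself, then SIGTERM it and wait for it to exit.
    pid=218595
    kill -0 "$pid"                           # still running?
    comm=$(ps --no-headers -o comm= "$pid")  # reactor_0 for a healthy nvmf_tgt
    if [ "$comm" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    fi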
************************************ 00:07:39.320 START TEST nvmf_filesystem_in_capsule 00:07:39.320 ************************************ 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=221184 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 221184 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 221184 ']' 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:39.320 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.320 [2024-06-08 00:32:57.569287] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:39.320 [2024-06-08 00:32:57.569333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.320 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.581 [2024-06-08 00:32:57.634315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.582 [2024-06-08 00:32:57.700537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.582 [2024-06-08 00:32:57.700571] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.582 [2024-06-08 00:32:57.700578] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.582 [2024-06-08 00:32:57.700584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.582 [2024-06-08 00:32:57.700590] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
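Everything that follows repeats the first suite with one difference: in_capsule=4096 reaches nvmf_create_transport as -c 4096, so the target accepts up to 4096 bytes of data carried inside an NVMe/TCP command capsule instead of requiring a separate data transfer. Sketched (flag values from the trace below):

    # The only changed knob in this second pass:
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # allow 4 KiB of in-capsule data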
00:07:39.582 [2024-06-08 00:32:57.700725] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.582 [2024-06-08 00:32:57.700838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.582 [2024-06-08 00:32:57.700992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.582 [2024-06-08 00:32:57.700993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.153 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:40.153 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:40.153 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.153 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:40.153 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.154 [2024-06-08 00:32:58.392024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.154 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.415 Malloc1 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.415 00:32:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.415 [2024-06-08 00:32:58.522698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:40.415 { 00:07:40.415 "name": "Malloc1", 00:07:40.415 "aliases": [ 00:07:40.415 "7fe13675-508c-4d16-89d5-3420769df94f" 00:07:40.415 ], 00:07:40.415 "product_name": "Malloc disk", 00:07:40.415 "block_size": 512, 00:07:40.415 "num_blocks": 1048576, 00:07:40.415 "uuid": "7fe13675-508c-4d16-89d5-3420769df94f", 00:07:40.415 "assigned_rate_limits": { 00:07:40.415 "rw_ios_per_sec": 0, 00:07:40.415 "rw_mbytes_per_sec": 0, 00:07:40.415 "r_mbytes_per_sec": 0, 00:07:40.415 "w_mbytes_per_sec": 0 00:07:40.415 }, 00:07:40.415 "claimed": true, 00:07:40.415 "claim_type": "exclusive_write", 00:07:40.415 "zoned": false, 00:07:40.415 "supported_io_types": { 00:07:40.415 "read": true, 00:07:40.415 "write": true, 00:07:40.415 "unmap": true, 00:07:40.415 "write_zeroes": true, 00:07:40.415 "flush": true, 00:07:40.415 "reset": true, 00:07:40.415 "compare": false, 00:07:40.415 "compare_and_write": false, 00:07:40.415 "abort": true, 00:07:40.415 "nvme_admin": false, 00:07:40.415 "nvme_io": false 00:07:40.415 }, 00:07:40.415 "memory_domains": [ 00:07:40.415 { 00:07:40.415 "dma_device_id": "system", 00:07:40.415 "dma_device_type": 1 00:07:40.415 }, 00:07:40.415 { 00:07:40.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.415 "dma_device_type": 2 00:07:40.415 } 00:07:40.415 ], 00:07:40.415 "driver_specific": {} 00:07:40.415 } 00:07:40.415 ]' 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.415 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.327 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.327 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:42.327 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.327 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:42.327 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.242 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.540 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:44.799 00:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:45.739 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:45.739 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.740 ************************************ 00:07:45.740 START TEST filesystem_in_capsule_ext4 00:07:45.740 ************************************ 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:45.740 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.740 mke2fs 1.46.5 (30-Dec-2021) 00:07:46.000 Discarding device blocks: 0/522240 done 00:07:46.000 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:46.000 Filesystem UUID: c678eefd-13f9-4b20-a610-a7964542c218 00:07:46.000 Superblock backups stored on blocks: 00:07:46.000 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:46.000 00:07:46.000 Allocating group tables: 0/64 done 00:07:46.000 Writing inode tables: 0/64 done 00:07:46.000 Creating journal (8192 blocks): done 00:07:46.941 Writing superblocks and filesystem accounting information: 0/64 done 00:07:46.941 00:07:46.941 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 221184 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.882 00:07:47.882 real 0m2.014s 00:07:47.882 user 0m0.031s 00:07:47.882 sys 0m0.064s 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.882 00:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:47.882 ************************************ 00:07:47.882 END TEST filesystem_in_capsule_ext4 00:07:47.882 ************************************ 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.882 ************************************ 00:07:47.882 START TEST filesystem_in_capsule_btrfs 00:07:47.882 ************************************ 00:07:47.882 00:33:06
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:47.882 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.453 btrfs-progs v6.6.2 00:07:48.453 See https://btrfs.readthedocs.io for more information. 00:07:48.453 00:07:48.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.453 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.453 this does not affect your deployments: 00:07:48.453 - DUP for metadata (-m dup) 00:07:48.453 - enabled no-holes (-O no-holes) 00:07:48.453 - enabled free-space-tree (-R free-space-tree) 00:07:48.453 00:07:48.453 Label: (null) 00:07:48.453 UUID: 16813aa6-6a0f-4e19-bcd2-00e2f16e2c15 00:07:48.453 Node size: 16384 00:07:48.453 Sector size: 4096 00:07:48.453 Filesystem size: 510.00MiB 00:07:48.453 Block group profiles: 00:07:48.453 Data: single 8.00MiB 00:07:48.453 Metadata: DUP 32.00MiB 00:07:48.453 System: DUP 8.00MiB 00:07:48.453 SSD detected: yes 00:07:48.453 Zoned device: no 00:07:48.453 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.453 Runtime features: free-space-tree 00:07:48.453 Checksum: crc32c 00:07:48.453 Number of devices: 1 00:07:48.453 Devices: 00:07:48.453 ID SIZE PATH 00:07:48.453 1 510.00MiB /dev/nvme0n1p1 00:07:48.453 00:07:48.453 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:48.453 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 221184 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.714 00:07:48.714 real 0m0.905s 00:07:48.714 user 0m0.028s 00:07:48.714 sys 0m0.133s 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.714 ************************************ 00:07:48.714 END TEST filesystem_in_capsule_btrfs 00:07:48.714 ************************************ 00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:48.714 00:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:48.974 ************************************
00:07:48.974 START TEST filesystem_in_capsule_xfs
************************************
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']'
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f
00:07:48.974 00:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:48.974 meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:07:48.974          =                       sectsz=512   attr=2, projid32bit=1
00:07:48.974          =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:07:48.974          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:07:48.974 data     =                       bsize=4096   blocks=130560, imaxpct=25
00:07:48.974          =                       sunit=0      swidth=0 blks
00:07:48.974 naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:07:48.974 log      =internal log           bsize=4096   blocks=16384, version=2
00:07:48.974          =                       sectsz=512   sunit=0 blks, lazy-count=1
00:07:48.974 realtime =none                   extsz=4096   blocks=0, rtextents=0
00:07:49.915 Discarding blocks...Done.
00:07:49.915 00:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0
00:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 221184
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:52.457 
00:07:52.457 real 0m3.211s
00:07:52.457 user 0m0.026s
00:07:52.457 sys 0m0.078s
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:07:52.457 ************************************
00:07:52.457 END TEST filesystem_in_capsule_xfs
************************************
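Stripped of the xtrace prefixes, the per-filesystem smoke test that target/filesystem.sh just ran against both btrfs and xfs is a mount/write/remove/unmount cycle followed by liveness and visibility checks. A condensed restatement, with paths and the target pid taken directly from the trace (error handling and the retry counter omitted):

# mount the freshly formatted namespace partition and exercise it
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa     # one file written through the NVMe/TCP namespace
sync
rm /mnt/device/aaa
sync
umount /mnt/device
# the nvmf target (pid 221184 in this run) must have survived the I/O
kill -0 221184
# the device and its partition must still be visible to the initiator
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1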
00:07:52.457 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:07:52.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 221184
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 221184 ']'
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 221184
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:52.458 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 221184
00:07:52.718 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:52.718 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:52.718 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 221184'
killing process with pid 221184
00:07:52.718 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 221184
00:07:52.718 00:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 221184
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:07:52.978 
00:07:52.978 real 0m13.507s
00:07:52.978 user 0m53.273s
00:07:52.978 sys 0m1.217s
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:52.978 ************************************
00:07:52.978 END TEST nvmf_filesystem_in_capsule
************************************
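The killprocess helper traced above (common/autotest_common.sh lines 949-973) follows this shape; the sketch only mirrors the checks visible in the trace, and the handling of a sudo-wrapped process is assumed since the trace shows just the comparison:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1          # line 949: reject an empty pid
    kill -0 "$pid" || return 1         # line 953: the process must still exist
    if [ "$(uname)" = Linux ]; then    # line 954
        process_name=$(ps --no-headers -o comm= "$pid")   # line 955: reactor_0 in this run
    fi
    # a sudo wrapper would need different treatment (branch body assumed)
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"                    # line 968: SIGTERM, then reap it
        wait "$pid"
    fi
}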
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:52.978 rmmod nvme_tcp
00:07:52.978 rmmod nvme_fabrics
00:07:52.978 rmmod nvme_keyring
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:52.978 00:33:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:55.523 00:33:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:55.523 
00:07:55.523 real 0m35.516s
00:07:55.523 user 1m43.854s
00:07:55.523 sys 0m7.957s
00:07:55.523 00:33:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:55.523 00:33:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:55.523 ************************************
00:07:55.523 END TEST nvmf_filesystem
************************************
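The kernel-module teardown in nvmftestfini is visible in the trace as a bounded retry around modprobe -r (nvmf/common.sh lines 120-125). A sketch of that loop; the success check and the pause between attempts are assumptions, since the trace shows only a first-try success with the rmmod output above:

set +e                                 # unloading can fail while references drain
for i in {1..20}; do
    # pulls out nvme_tcp, nvme_fabrics, and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-tcp && break
    sleep 1                            # pause between attempts (assumed)
done
modprobe -v -r nvme-fabrics
set -e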
00:07:55.523 00:33:13 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:55.523 00:33:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:55.523 00:33:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:55.523 00:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:55.523 ************************************
00:07:55.523 START TEST nvmf_target_discovery
************************************
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:07:55.523 * Looking for test storage...
00:07:55.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:55.523 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.524 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.111 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.112 00:33:20 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:02.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:02.112 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:02.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:02.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up
00:08:02.112 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:02.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:02.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms
00:08:02.373 
00:08:02.373 --- 10.0.0.2 ping statistics ---
00:08:02.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:02.373 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:02.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:02.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms
00:08:02.373 
00:08:02.373 --- 10.0.0.1 ping statistics ---
00:08:02.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:02.373 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:02.373 00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
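Taken together, the nvmf_tcp_init steps above build the standard phy loopback topology: one port of the E810 pair is moved into a private network namespace for the target while the other stays in the root namespace for the initiator, so NVMe/TCP traffic crosses a real link. The equivalent standalone commands, with addresses and names exactly as in the trace:

ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator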
00:33:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=228752
00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 228752
00:33:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 228752 ']'
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable
00:33:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:02.373 [2024-06-08 00:33:20.590880] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:08:02.373 [2024-06-08 00:33:20.590943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:02.373 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.634 [2024-06-08 00:33:20.660928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:02.634 [2024-06-08 00:33:20.736172] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:02.634 [2024-06-08 00:33:20.736210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:02.634 [2024-06-08 00:33:20.736217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:02.634 [2024-06-08 00:33:20.736224] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:02.634 [2024-06-08 00:33:20.736230] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:02.634 [2024-06-08 00:33:20.736366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.634 [2024-06-08 00:33:20.736515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:08:02.634 [2024-06-08 00:33:20.736578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.634 [2024-06-08 00:33:20.736580] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 [2024-06-08 00:33:21.417974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
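waitforlisten (pid 228752 above) gates the rest of the test on the target's RPC socket becoming usable. Only its entry points are traced (rpc_addr=/var/tmp/spdk.sock, max_retries=100), so the loop body below is an assumption about its shape, not the harness's literal code; rpc_get_methods is used as the probe because it is a standard SPDK RPC that any running target answers:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    local i=0
    while ((i++ < max_retries)); do
        kill -0 "$pid" || return 1     # the target died during startup
        # a successful RPC round-trip means the UNIX socket is up (probe method assumed)
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}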
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 Null1
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.205 [2024-06-08 00:33:21.478273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.205 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.466 Null2
00:08:03.466 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.466 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:08:03.466 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.466 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:21
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 Null3 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 Null4 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.467 00:33:21 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
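Stripped of the rpc_cmd tracing, the provisioning loop in target/discovery.sh that produced the four subsystems above reduces to the following; every name, size, and address is taken directly from the trace:

for i in 1 2 3 4; do
    # 100 MiB null bdev with 512-byte blocks, used as the backing namespace
    rpc_cmd bdev_null_create Null$i 102400 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
# expose the discovery subsystem itself, plus a referral to a second discovery service on 4430
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430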
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:08:03.467 
00:08:03.467 Discovery Log Number of Records 6, Generation counter 6
00:08:03.467 =====Discovery Log Entry 0======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: current discovery subsystem
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4420
00:08:03.467 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: explicit discovery connections, duplicate discovery information
00:08:03.467 sectype: none
00:08:03.467 =====Discovery Log Entry 1======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: nvme subsystem
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4420
00:08:03.467 subnqn: nqn.2016-06.io.spdk:cnode1
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: none
00:08:03.467 sectype: none
00:08:03.467 =====Discovery Log Entry 2======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: nvme subsystem
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4420
00:08:03.467 subnqn: nqn.2016-06.io.spdk:cnode2
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: none
00:08:03.467 sectype: none
00:08:03.467 =====Discovery Log Entry 3======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: nvme subsystem
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4420
00:08:03.467 subnqn: nqn.2016-06.io.spdk:cnode3
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: none
00:08:03.467 sectype: none
00:08:03.467 =====Discovery Log Entry 4======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: nvme subsystem
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4420
00:08:03.467 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: none
00:08:03.467 sectype: none
00:08:03.467 =====Discovery Log Entry 5======
00:08:03.467 trtype: tcp
00:08:03.467 adrfam: ipv4
00:08:03.467 subtype: discovery subsystem referral
00:08:03.467 treq: not required
00:08:03.467 portid: 0
00:08:03.467 trsvcid: 4430
00:08:03.467 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:03.467 traddr: 10.0.0.2
00:08:03.467 eflags: none
00:08:03.467 sectype: none
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:03.467 Perform nvmf subsystem discovery via RPC
00:08:03.467 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.467 [
00:08:03.467   {
00:08:03.467     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:03.467     "subtype": "Discovery",
00:08:03.467     "listen_addresses": [
00:08:03.467       {
00:08:03.467         "trtype": "TCP",
00:08:03.467         "adrfam": "IPv4",
00:08:03.467         "traddr": "10.0.0.2",
00:08:03.467         "trsvcid": "4420"
00:08:03.467       }
00:08:03.467     ],
00:08:03.467     "allow_any_host": true,
00:08:03.467     "hosts": []
00:08:03.467   },
00:08:03.467   {
00:08:03.467     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:03.467     "subtype": "NVMe",
00:08:03.467     "listen_addresses": [
00:08:03.467       {
00:08:03.467         "trtype": "TCP",
00:08:03.467         "adrfam": "IPv4",
00:08:03.467         "traddr": "10.0.0.2",
00:08:03.467         "trsvcid": "4420"
00:08:03.467       }
00:08:03.467     ],
00:08:03.467     "allow_any_host": true,
00:08:03.467     "hosts": [],
00:08:03.467     "serial_number": "SPDK00000000000001",
00:08:03.467     "model_number": "SPDK bdev Controller",
00:08:03.467     "max_namespaces": 32,
00:08:03.467     "min_cntlid": 1,
00:08:03.467     "max_cntlid": 65519,
00:08:03.467     "namespaces": [
00:08:03.467       {
00:08:03.467         "nsid": 1,
00:08:03.467         "bdev_name": "Null1",
00:08:03.467         "name": "Null1",
00:08:03.467         "nguid": "1A99D9CF6DCB492FAEE0898DF6D0E9A2",
00:08:03.467         "uuid": "1a99d9cf-6dcb-492f-aee0-898df6d0e9a2"
00:08:03.467       }
00:08:03.467     ]
00:08:03.467   },
00:08:03.467   {
00:08:03.728     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:03.728     "subtype": "NVMe",
00:08:03.728     "listen_addresses": [
00:08:03.728       {
00:08:03.728         "trtype": "TCP",
00:08:03.728         "adrfam": "IPv4",
00:08:03.728         "traddr": "10.0.0.2",
00:08:03.728         "trsvcid": "4420"
00:08:03.728       }
00:08:03.728     ],
00:08:03.728     "allow_any_host": true,
00:08:03.728     "hosts": [],
00:08:03.728     "serial_number": "SPDK00000000000002",
00:08:03.728     "model_number": "SPDK bdev Controller",
00:08:03.728     "max_namespaces": 32,
00:08:03.728     "min_cntlid": 1,
00:08:03.728     "max_cntlid": 65519,
00:08:03.728     "namespaces": [
00:08:03.728       {
00:08:03.728         "nsid": 1,
00:08:03.728         "bdev_name": "Null2",
00:08:03.728         "name": "Null2",
00:08:03.728         "nguid": "BFAFAE2B3E464EDB8AB108DE451ACF4C",
00:08:03.728         "uuid": "bfafae2b-3e46-4edb-8ab1-08de451acf4c"
00:08:03.728       }
00:08:03.728     ]
00:08:03.728   },
00:08:03.728   {
00:08:03.728     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:03.728     "subtype": "NVMe",
00:08:03.728     "listen_addresses": [
00:08:03.728       {
00:08:03.728         "trtype": "TCP",
00:08:03.728         "adrfam": "IPv4",
00:08:03.728         "traddr": "10.0.0.2",
00:08:03.728         "trsvcid": "4420"
00:08:03.728       }
00:08:03.728     ],
00:08:03.728     "allow_any_host": true,
00:08:03.728     "hosts": [],
00:08:03.728     "serial_number": "SPDK00000000000003",
00:08:03.728     "model_number": "SPDK bdev Controller",
00:08:03.728     "max_namespaces": 32,
00:08:03.728     "min_cntlid": 1,
00:08:03.728     "max_cntlid": 65519,
00:08:03.728     "namespaces": [
00:08:03.728       {
00:08:03.728         "nsid": 1,
00:08:03.728         "bdev_name": "Null3",
00:08:03.728         "name": "Null3",
00:08:03.728         "nguid": "3F5648A58C1846D78C40F3C07EC48ED0",
00:08:03.728         "uuid": "3f5648a5-8c18-46d7-8c40-f3c07ec48ed0"
00:08:03.728       }
00:08:03.728     ]
00:08:03.728   },
00:08:03.728   {
00:08:03.728     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:03.728     "subtype": "NVMe",
00:08:03.728     "listen_addresses": [
00:08:03.728       {
00:08:03.728         "trtype": "TCP",
00:08:03.728         "adrfam": "IPv4",
00:08:03.728         "traddr": "10.0.0.2",
00:08:03.728         "trsvcid": "4420"
00:08:03.728       }
00:08:03.728     ],
00:08:03.728     "allow_any_host": true,
00:08:03.728     "hosts": [],
00:08:03.728     "serial_number": "SPDK00000000000004",
00:08:03.728     "model_number": "SPDK bdev Controller",
00:08:03.728     "max_namespaces": 32,
00:08:03.728     "min_cntlid": 1,
00:08:03.728     "max_cntlid": 65519,
00:08:03.728     "namespaces": [
00:08:03.728       {
00:08:03.728         "nsid": 1,
00:08:03.728         "bdev_name": "Null4",
00:08:03.728         "name": "Null4",
00:08:03.728         "nguid": "410A0F0C563D4A3C98D24D0B8CF97C29",
00:08:03.728         "uuid": "410a0f0c-563d-4a3c-98d2-4d0b8cf97c29"
00:08:03.728       }
00:08:03.728     ]
00:08:03.728   }
00:08:03.728 ]
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
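The same target state can also be inspected ad hoc with rpc.py and jq outside the harness; the jq filter here mirrors the one discovery.sh applies to bdev_get_bdevs just below, and the socket path is the default /var/tmp/spdk.sock:

# list remaining null bdevs and subsystem NQNs while the teardown loop runs
scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq -r '.[].name'
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems | jq -r '.[].nqn'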
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:33:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.729 rmmod nvme_tcp 00:08:03.729 rmmod nvme_fabrics 00:08:03.729 rmmod nvme_keyring 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 228752 ']' 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 228752 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 228752 ']' 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 228752 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:03.729 00:33:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 228752 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 228752' 00:08:03.990 killing process with pid 228752 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 228752 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 228752 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.990 00:33:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.535 00:33:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.535 00:08:06.535 real 0m10.954s 00:08:06.535 user 0m7.925s 00:08:06.535 sys 0m5.602s 00:08:06.535 00:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:06.535 00:33:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.535 ************************************ 00:08:06.535 END TEST nvmf_target_discovery 00:08:06.535 ************************************ 00:08:06.535 00:33:24 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:06.535 00:33:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:06.535 00:33:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:06.535 00:33:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.535 ************************************ 00:08:06.535 START TEST nvmf_referrals 00:08:06.535 ************************************ 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:06.535 * Looking for test storage... 00:08:06.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.535 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
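referrals.sh pins three loopback addresses (127.0.0.2 through 127.0.0.4) to act as referral targets; the referral port (4430) and the discovery and subsystem NQNs follow in the next entries. The core of the test is registering those addresses against the local discovery service and counting them back, roughly as below (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./spdk/scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  ./spdk/scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3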
00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.536 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.123 00:33:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:13.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.123 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:13.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.124 00:33:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:13.124 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:13.124 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.124 00:33:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.124 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:08:13.441 00:08:13.441 --- 10.0.0.2 ping statistics --- 00:08:13.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.441 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:08:13.441 00:08:13.441 --- 10.0.0.1 ping statistics --- 00:08:13.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.441 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=233307 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 233307 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 233307 ']' 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:13.441 00:33:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.441 [2024-06-08 00:33:31.551769] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:08:13.441 [2024-06-08 00:33:31.551829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.441 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.441 [2024-06-08 00:33:31.620399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.441 [2024-06-08 00:33:31.685279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.441 [2024-06-08 00:33:31.685317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.441 [2024-06-08 00:33:31.685325] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.441 [2024-06-08 00:33:31.685331] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.441 [2024-06-08 00:33:31.685337] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.441 [2024-06-08 00:33:31.685476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.441 [2024-06-08 00:33:31.685597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.441 [2024-06-08 00:33:31.685715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.441 [2024-06-08 00:33:31.685716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.051 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:14.051 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:14.052 00:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.052 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:14.052 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.312 [2024-06-08 00:33:32.376070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.312 [2024-06-08 00:33:32.392277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
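At this point the target is up: the trace above moved one port of the E810 pair (cvl_0_0) into its own network namespace, gave the two ends 10.0.0.2 (target) and 10.0.0.1 (initiator), verified both directions with ping, then started nvmf_tgt inside the namespace and opened the discovery listener on port 8009. Condensed from the trace, hedging that the real helpers in nvmf/common.sh also add address flushes and per-port iptables rules:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2            # initiator -> target reachability check

  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery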
00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.312 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.313 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.574 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:14.836 00:33:32 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.836 00:33:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.836 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.096 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.355 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.616 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.877 00:33:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.877 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.138 rmmod nvme_tcp 00:08:16.138 rmmod nvme_fabrics 00:08:16.138 rmmod nvme_keyring 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 233307 ']' 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 233307 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 233307 ']' 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 233307 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 233307 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 233307' 00:08:16.138 killing process with pid 233307 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 233307 00:08:16.138 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 233307 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.398 00:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.941 00:33:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.941 00:08:18.941 real 0m12.311s 00:08:18.941 user 0m14.309s 00:08:18.941 sys 0m5.899s 00:08:18.941 00:33:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 
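The teardown above follows the same shape every nvmf target test in this run uses: sync, unload nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess on the saved nvmfpid. A simplified rendering of killprocess as the trace exercises it (the real helper in autotest_common.sh also checks the OS via uname and treats a sudo wrapper process specially):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                # nothing to do if it is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }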
00:08:18.941 00:33:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.941 ************************************ 00:08:18.941 END TEST nvmf_referrals 00:08:18.941 ************************************ 00:08:18.941 00:33:36 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:18.941 00:33:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:18.941 00:33:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.941 00:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.941 ************************************ 00:08:18.941 START TEST nvmf_connect_disconnect 00:08:18.941 ************************************ 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:18.941 * Looking for test storage... 00:08:18.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.941 
00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.941 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.942 00:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
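gather_supported_nvmf_pci_devs sorts the machine's NICs into e810/x722/mlx buckets purely by PCI vendor:device ID (0x1592 and 0x159b for E810, 0x37d2 for X722, the 0x15b3 list for Mellanox). A rough sysfs-based equivalent of that bucketing, assuming only that /sys is mounted (the real code reads a pre-built pci_bus_cache map instead of walking sysfs):

  declare -a e810 x722 mlx
  for pci in /sys/bus/pci/devices/*; do
      id="$(cat "$pci/vendor"):$(cat "$pci/device")"      # e.g. 0x8086:0x159b
      case $id in
          0x8086:0x1592|0x8086:0x159b) e810+=("${pci##*/}") ;;
          0x8086:0x37d2)               x722+=("${pci##*/}") ;;
          0x15b3:*)                    mlx+=("${pci##*/}")  ;;
      esac
  done
  echo "E810 ports: ${e810[*]:-none}"    # this rig reports 0000:4b:00.0 and 0000:4b:00.1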
00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:25.529 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:25.529 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.529 
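The trace above builds the per-family NIC lists (e810, x722, mlx) out of vendor:device pairs cached in pci_bus_cache, and since this run selects the E810 family ([[ e810 == e810 ]]) pci_devs ends up holding the two 0x8086:0x159b functions reported above. A minimal standalone sketch of the same discovery, reading sysfs directly instead of the harness's prebuilt pci_bus_cache (a sketch, not gather_supported_nvmf_pci_devs itself):

    # Enumerate Intel E810 functions (vendor 0x8086, device 0x1592 or 0x159b)
    # straight from sysfs.
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")    # e.g. 0x8086 (Intel)
      device=$(<"$dev/device")    # e.g. 0x159b (E810-XXV)
      if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
      fi
    done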
00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:25.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:25.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.529 00:33:43 
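Each selected PCI function is then resolved to its kernel interface through the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob shown above, which is what turns 0000:4b:00.0 and 0000:4b:00.1 into cvl_0_0 and cvl_0_1. A hedged standalone equivalent of that lookup:

    # Map a PCI function to the net devices the kernel bound to it; the
    # harness additionally requires the link state to be "up" before use.
    pci=0000:4b:00.0                 # first E810 function from the trace
    for path in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net devices under $pci: ${path##*/} ($(<"$path/operstate"))"
    done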
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.529 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.530 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.838 ms 00:08:25.530 00:08:25.530 --- 10.0.0.2 ping statistics --- 00:08:25.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.530 rtt min/avg/max/mdev = 0.838/0.838/0.838/0.000 ms 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.453 ms 00:08:25.790 00:08:25.790 --- 10.0.0.1 ping statistics --- 00:08:25.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.790 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=238109 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 238109 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 238109 ']' 00:08:25.790 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.791 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:25.791 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.791 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:25.791 00:33:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.791 [2024-06-08 00:33:43.922438] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
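The namespace plumbing above leaves one physical port on each side of the connection: cvl_0_0 moves into cvl_0_0_ns_spdk and takes the target address 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and the two pings prove reachability in both directions before nvmf_tgt is started inside the namespace. A hedged sketch of that launch plus a readiness poll (waitforlisten in autotest_common.sh is more elaborate; rpc_get_methods is used here purely as a liveness probe):

    # Start the target inside the namespace: -i 0 is the shm id, -e 0xFFFF the
    # tracepoint mask, -m 0xF a 4-core mask (hence the four reactors below).
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
    done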
00:08:25.791 [2024-06-08 00:33:43.922487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.791 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.791 [2024-06-08 00:33:43.988508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.791 [2024-06-08 00:33:44.053606] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.791 [2024-06-08 00:33:44.053644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.791 [2024-06-08 00:33:44.053651] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.791 [2024-06-08 00:33:44.053658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.791 [2024-06-08 00:33:44.053663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.791 [2024-06-08 00:33:44.053798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.791 [2024-06-08 00:33:44.053911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.791 [2024-06-08 00:33:44.054064] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.791 [2024-06-08 00:33:44.054065] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 [2024-06-08 00:33:44.738996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.733 00:33:44 
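The rpc_cmd calls around this point provision the whole test fixture: a TCP transport (-o -u 8192 -c 0 as traced above), a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and, just below, a listener on 10.0.0.2:4420. Flattened into a plain rpc.py sequence followed by the loop the test then runs (a sketch; connect_disconnect.sh drives the same steps through rpc_cmd, and with NET_TYPE=phy it runs 100 iterations with NVME_CONNECT='nvme connect -i 8'):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                    # creates Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Each iteration prints one "... disconnected 1 controller(s)" line below.
    for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done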
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.733 [2024-06-08 00:33:44.798442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:26.733 00:33:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:29.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.235 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats once per connect/disconnect iteration here, timestamps 00:09:24.779 through 00:11:11.922 ...] 00:11:14.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:11:16.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.656 rmmod nvme_tcp 00:12:19.656 rmmod nvme_fabrics 00:12:19.656 rmmod nvme_keyring 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 238109 ']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 238109 
']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 238109' 00:12:19.656 killing process with pid 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 238109 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.656 00:37:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.202 00:37:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.202 00:12:22.202 real 4m3.183s 00:12:22.202 user 15m28.952s 00:12:22.202 sys 0m21.607s 00:12:22.202 00:37:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:22.202 00:37:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.202 ************************************ 00:12:22.202 END TEST nvmf_connect_disconnect 00:12:22.202 ************************************ 00:12:22.202 00:37:39 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:22.202 00:37:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:22.202 00:37:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:22.202 00:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.202 ************************************ 00:12:22.202 START TEST nvmf_multitarget 00:12:22.202 ************************************ 00:12:22.202 00:37:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:22.202 * Looking for test storage... 
00:12:22.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.202 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
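Like the connect_disconnect init earlier, nvmftestinit first calls remove_spdk_ns so a namespace left over from the previous test cannot collide with the cvl_0_0_ns_spdk it is about to create (the eval with 14> /dev/null just below only keeps the helper's own xtrace out of the log). The helper's body is not shown in this trace; its assumed effect, as a hedged sketch:

    # Assumed behavior of _remove_spdk_ns: delete every leftover SPDK test
    # namespace, which returns its interfaces to the default namespace.
    while read -r ns _; do
      [[ $ns == *_ns_spdk ]] && ip netns delete "$ns"
    done < <(ip netns list)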
00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.203 00:37:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:28.799 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:28.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:28.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:28.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.799 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.800 00:37:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:12:28.800 00:12:28.800 --- 10.0.0.2 ping statistics --- 00:12:28.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.800 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:12:28.800 00:12:28.800 --- 10.0.0.1 ping statistics --- 00:12:28.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.800 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=289731 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 289731 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 289731 ']' 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:28.800 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.062 [2024-06-08 00:37:47.134391] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:12:29.062 [2024-06-08 00:37:47.134492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.062 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.062 [2024-06-08 00:37:47.205343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.062 [2024-06-08 00:37:47.280303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.062 [2024-06-08 00:37:47.280341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.062 [2024-06-08 00:37:47.280348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.062 [2024-06-08 00:37:47.280355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.062 [2024-06-08 00:37:47.280361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.062 [2024-06-08 00:37:47.280501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.062 [2024-06-08 00:37:47.280622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.062 [2024-06-08 00:37:47.280779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.062 [2024-06-08 00:37:47.280780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.634 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:29.634 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:12:29.634 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.634 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:29.634 00:37:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.895 00:37:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.895 00:37:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.895 00:37:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.895 00:37:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:29.895 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.895 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.895 "nvmf_tgt_1" 00:12:29.895 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:30.155 "nvmf_tgt_2" 00:12:30.155 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.155 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:30.155 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:30.155 
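This is the core of the multitarget test: with only the default target, nvmf_get_targets piped through jq length reports 1; after nvmf_tgt_1 and nvmf_tgt_2 are created above (each echoing its name) the count must be 3; and after the two deletions just below (each returning "true") it must drop back to 1. The same sequence as a bare script (multitarget_rpc.py wraps the corresponding JSON-RPCs; -s 32, as far as this trace shows, sizes each new target, presumably its subsystem capacity):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # default target only
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default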
00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:30.155 true 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:30.416 true 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.416 rmmod nvme_tcp 00:12:30.416 rmmod nvme_fabrics 00:12:30.416 rmmod nvme_keyring 00:12:30.416 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 289731 ']' 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 289731 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 289731 ']' 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 289731 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 289731 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:30.677 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 289731' 00:12:30.678 killing process with pid 289731 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 289731 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 289731 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.678 00:37:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.225 00:37:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.225 00:12:33.225 real 0m11.009s 00:12:33.225 user 0m9.165s 00:12:33.225 sys 0m5.608s 00:12:33.225 00:37:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:33.225 00:37:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.225 ************************************ 00:12:33.225 END TEST nvmf_multitarget 00:12:33.225 ************************************ 00:12:33.225 00:37:51 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.225 00:37:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:33.225 00:37:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:33.225 00:37:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.225 ************************************ 00:12:33.225 START TEST nvmf_rpc 00:12:33.225 ************************************ 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.225 * Looking for test storage... 00:12:33.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.225 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.226 00:37:51 
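Before doing anything else, the nvmf_rpc suite generates a fresh host identity: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID reused on every later connect is just that trailing uuid. A rough sketch of the derivation (the exact parameter expansion in common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # one way to keep only the uuid part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")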
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.226 
00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.226 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.846 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.846 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:39.846 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.846 
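The discovery logic above never hardcodes interface names: it keeps per-vendor PCI ID tables (e810, x722, mlx) and then asks sysfs which netdev each matching function exposes. A rough equivalent for the two ice ports found here, assuming the same 0000:4b:00.x addresses:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    # the kernel publishes each function's netdev name under net/
    for path in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
    done
  done

which is how cvl_0_0 and cvl_0_1 end up in the net_devs array.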
00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.846 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:39.847 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.847 00:37:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.847 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.847 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.847 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.847 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:12:40.109 00:12:40.109 --- 10.0.0.2 ping statistics --- 00:12:40.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.109 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:12:40.109 00:12:40.109 --- 10.0.0.1 ping statistics --- 00:12:40.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.109 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=294186 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 294186 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 294186 ']' 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:40.109 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.109 [2024-06-08 00:37:58.332099] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
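The two clean pings above validate the namespace-split topology that nvmf_tcp_init just built on the physical ports (NET_TYPE=phy): the target port lives in its own netns, the initiator port stays in the root namespace, and TCP/4420 is explicitly allowed in. The essential commands, in trace order:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator

From here on, every nvmf_tgt invocation runs under ip netns exec cvl_0_0_ns_spdk.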
00:12:40.109 [2024-06-08 00:37:58.332158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.370 [2024-06-08 00:37:58.401590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.370 [2024-06-08 00:37:58.477115] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.370 [2024-06-08 00:37:58.477153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.370 [2024-06-08 00:37:58.477161] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.370 [2024-06-08 00:37:58.477167] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.370 [2024-06-08 00:37:58.477173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.370 [2024-06-08 00:37:58.477314] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.370 [2024-06-08 00:37:58.477443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.370 [2024-06-08 00:37:58.477541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.370 [2024-06-08 00:37:58.477542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.942 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:40.942 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:40.942 00:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.942 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:40.942 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:40.943 "tick_rate": 2400000000, 00:12:40.943 "poll_groups": [ 00:12:40.943 { 00:12:40.943 "name": "nvmf_tgt_poll_group_000", 00:12:40.943 "admin_qpairs": 0, 00:12:40.943 "io_qpairs": 0, 00:12:40.943 "current_admin_qpairs": 0, 00:12:40.943 "current_io_qpairs": 0, 00:12:40.943 "pending_bdev_io": 0, 00:12:40.943 "completed_nvme_io": 0, 00:12:40.943 "transports": [] 00:12:40.943 }, 00:12:40.943 { 00:12:40.943 "name": "nvmf_tgt_poll_group_001", 00:12:40.943 "admin_qpairs": 0, 00:12:40.943 "io_qpairs": 0, 00:12:40.943 "current_admin_qpairs": 0, 00:12:40.943 "current_io_qpairs": 0, 00:12:40.943 "pending_bdev_io": 0, 00:12:40.943 "completed_nvme_io": 0, 00:12:40.943 "transports": [] 00:12:40.943 }, 00:12:40.943 { 00:12:40.943 "name": "nvmf_tgt_poll_group_002", 00:12:40.943 "admin_qpairs": 0, 00:12:40.943 "io_qpairs": 0, 00:12:40.943 "current_admin_qpairs": 0, 00:12:40.943 "current_io_qpairs": 0, 00:12:40.943 "pending_bdev_io": 0, 00:12:40.943 "completed_nvme_io": 0, 00:12:40.943 "transports": [] 
00:12:40.943 }, 00:12:40.943 { 00:12:40.943 "name": "nvmf_tgt_poll_group_003", 00:12:40.943 "admin_qpairs": 0, 00:12:40.943 "io_qpairs": 0, 00:12:40.943 "current_admin_qpairs": 0, 00:12:40.943 "current_io_qpairs": 0, 00:12:40.943 "pending_bdev_io": 0, 00:12:40.943 "completed_nvme_io": 0, 00:12:40.943 "transports": [] 00:12:40.943 } 00:12:40.943 ] 00:12:40.943 }' 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:40.943 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.204 [2024-06-08 00:37:59.279383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.204 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:41.204 "tick_rate": 2400000000, 00:12:41.204 "poll_groups": [ 00:12:41.204 { 00:12:41.204 "name": "nvmf_tgt_poll_group_000", 00:12:41.204 "admin_qpairs": 0, 00:12:41.204 "io_qpairs": 0, 00:12:41.204 "current_admin_qpairs": 0, 00:12:41.204 "current_io_qpairs": 0, 00:12:41.204 "pending_bdev_io": 0, 00:12:41.204 "completed_nvme_io": 0, 00:12:41.204 "transports": [ 00:12:41.204 { 00:12:41.204 "trtype": "TCP" 00:12:41.204 } 00:12:41.204 ] 00:12:41.204 }, 00:12:41.204 { 00:12:41.204 "name": "nvmf_tgt_poll_group_001", 00:12:41.204 "admin_qpairs": 0, 00:12:41.204 "io_qpairs": 0, 00:12:41.204 "current_admin_qpairs": 0, 00:12:41.204 "current_io_qpairs": 0, 00:12:41.204 "pending_bdev_io": 0, 00:12:41.204 "completed_nvme_io": 0, 00:12:41.204 "transports": [ 00:12:41.204 { 00:12:41.204 "trtype": "TCP" 00:12:41.204 } 00:12:41.204 ] 00:12:41.204 }, 00:12:41.204 { 00:12:41.204 "name": "nvmf_tgt_poll_group_002", 00:12:41.204 "admin_qpairs": 0, 00:12:41.204 "io_qpairs": 0, 00:12:41.204 "current_admin_qpairs": 0, 00:12:41.204 "current_io_qpairs": 0, 00:12:41.204 "pending_bdev_io": 0, 00:12:41.204 "completed_nvme_io": 0, 00:12:41.204 "transports": [ 00:12:41.204 { 00:12:41.204 "trtype": "TCP" 00:12:41.204 } 00:12:41.204 ] 00:12:41.204 }, 00:12:41.204 { 00:12:41.204 "name": "nvmf_tgt_poll_group_003", 00:12:41.204 "admin_qpairs": 0, 00:12:41.204 "io_qpairs": 0, 00:12:41.204 "current_admin_qpairs": 0, 00:12:41.204 "current_io_qpairs": 0, 00:12:41.204 "pending_bdev_io": 0, 00:12:41.204 "completed_nvme_io": 0, 00:12:41.204 "transports": [ 00:12:41.204 { 00:12:41.204 "trtype": "TCP" 00:12:41.205 } 00:12:41.205 ] 00:12:41.205 } 00:12:41.205 ] 
00:12:41.205 }' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 Malloc1 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 [2024-06-08 00:37:59.451122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.205 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:41.205 [2024-06-08 00:37:59.477861] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:41.465 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.465 could not add new controller: failed to write to nvme-fabrics device 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.465 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.852 00:38:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.852 00:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:42.852 00:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.852 00:38:00 
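That expected failure, followed by a clean connect, is the heart of the host-ACL check: with allow_any_host disabled (-d above), nvmf_qpair_access_allowed rejects any host NQN not on the subsystem's list, and nvmf_subsystem_add_host is what flipped the same connect from an I/O error to success. Stripped of the NOT/xtrace scaffolding, the sequence is (rpc_cmd is the suite's wrapper for SPDK RPC calls):

  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 && exit 1   # must be refused
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420             # now succeeds

The trace below repeats the same dance in reverse: remove_host makes the connect fail again, and allow_any_host -e makes it succeed without any per-host entry.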
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:42.852 00:38:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:44.773 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:44.773 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:44.773 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.773 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:44.773 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.773 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:44.773 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.034 [2024-06-08 00:38:03.192935] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:45.034 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:45.034 could not add new controller: failed to write to nvme-fabrics device 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.034 00:38:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.948 00:38:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.948 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:46.948 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.948 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:46.948 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 [2024-06-08 00:38:06.904158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:48.862 00:38:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.246 00:38:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.246 00:38:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:50.246 00:38:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.246 00:38:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:50.246 00:38:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:52.160 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 [2024-06-08 00:38:10.605777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.421 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.422 
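Every pass of the seq 1 5 loop rebuilds the subsystem from scratch, so what is being exercised here is the RPC surface itself rather than I/O. One iteration, condensed from the trace (waitforserial blocks until lsblk reports the SPDKISFASTANDAWESOME serial):

  for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done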
00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.422 00:38:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.336 00:38:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.336 00:38:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:54.336 00:38:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.336 00:38:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:54.336 00:38:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.297 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.298 00:38:14 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 [2024-06-08 00:38:14.317782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.298 00:38:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.680 00:38:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.680 00:38:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:57.680 00:38:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.680 00:38:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:57.680 00:38:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:00.226 00:38:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 [2024-06-08 00:38:18.060146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.226 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:00.227 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.227 00:38:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:00.227 00:38:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.612 00:38:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.612 00:38:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:01.612 00:38:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
00:13:01.612 00:38:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:01.612 00:38:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.551 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.552 [2024-06-08 00:38:21.779697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.552 00:38:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.552 00:38:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.476 00:38:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.476 00:38:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:05.476 00:38:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.476 00:38:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:05.476 00:38:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:07.390 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
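The waitforserial and waitforserial_disconnect helpers traced above (autotest_common.sh@1197-1230) poll lsblk until a namespace with the expected serial appears or disappears. A simplified sketch of that polling logic, not the verbatim helpers:

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches the subsystem serial
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # succeed once no device with this serial is visible any more
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }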
00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 [2024-06-08 00:38:25.647158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.391 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 [2024-06-08 00:38:25.707279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 [2024-06-08 00:38:25.771467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.652 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 [2024-06-08 00:38:25.827658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 [2024-06-08 00:38:25.887844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.653 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:07.914 "tick_rate": 2400000000, 00:13:07.914 "poll_groups": [ 00:13:07.914 { 00:13:07.914 "name": "nvmf_tgt_poll_group_000", 00:13:07.914 "admin_qpairs": 0, 00:13:07.914 
"io_qpairs": 224, 00:13:07.914 "current_admin_qpairs": 0, 00:13:07.914 "current_io_qpairs": 0, 00:13:07.914 "pending_bdev_io": 0, 00:13:07.914 "completed_nvme_io": 463, 00:13:07.914 "transports": [ 00:13:07.914 { 00:13:07.914 "trtype": "TCP" 00:13:07.914 } 00:13:07.914 ] 00:13:07.914 }, 00:13:07.914 { 00:13:07.914 "name": "nvmf_tgt_poll_group_001", 00:13:07.914 "admin_qpairs": 1, 00:13:07.914 "io_qpairs": 223, 00:13:07.914 "current_admin_qpairs": 0, 00:13:07.914 "current_io_qpairs": 0, 00:13:07.914 "pending_bdev_io": 0, 00:13:07.914 "completed_nvme_io": 328, 00:13:07.914 "transports": [ 00:13:07.914 { 00:13:07.914 "trtype": "TCP" 00:13:07.914 } 00:13:07.914 ] 00:13:07.914 }, 00:13:07.914 { 00:13:07.914 "name": "nvmf_tgt_poll_group_002", 00:13:07.914 "admin_qpairs": 6, 00:13:07.914 "io_qpairs": 218, 00:13:07.914 "current_admin_qpairs": 0, 00:13:07.914 "current_io_qpairs": 0, 00:13:07.914 "pending_bdev_io": 0, 00:13:07.914 "completed_nvme_io": 220, 00:13:07.914 "transports": [ 00:13:07.914 { 00:13:07.914 "trtype": "TCP" 00:13:07.914 } 00:13:07.914 ] 00:13:07.914 }, 00:13:07.914 { 00:13:07.914 "name": "nvmf_tgt_poll_group_003", 00:13:07.914 "admin_qpairs": 0, 00:13:07.914 "io_qpairs": 224, 00:13:07.914 "current_admin_qpairs": 0, 00:13:07.914 "current_io_qpairs": 0, 00:13:07.914 "pending_bdev_io": 0, 00:13:07.914 "completed_nvme_io": 228, 00:13:07.914 "transports": [ 00:13:07.914 { 00:13:07.914 "trtype": "TCP" 00:13:07.914 } 00:13:07.914 ] 00:13:07.914 } 00:13:07.914 ] 00:13:07.914 }' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:07.914 00:38:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.915 rmmod nvme_tcp 00:13:07.915 rmmod nvme_fabrics 00:13:07.915 rmmod nvme_keyring 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:07.915 00:38:26 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 294186 ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 294186 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 294186 ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 294186 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 294186 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 294186' 00:13:07.915 killing process with pid 294186 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 294186 00:13:07.915 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 294186 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.176 00:38:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.725 00:38:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.725 00:13:10.725 real 0m37.341s 00:13:10.725 user 1m53.288s 00:13:10.725 sys 0m7.070s 00:13:10.725 00:38:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:10.725 00:38:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 ************************************ 00:13:10.725 END TEST nvmf_rpc 00:13:10.725 ************************************ 00:13:10.725 00:38:28 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.725 00:38:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:10.725 00:38:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:10.725 00:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 ************************************ 00:13:10.725 START TEST nvmf_invalid 00:13:10.725 ************************************ 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.725 * Looking for test storage... 
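One note before the nvmf_invalid output continues: the jsum helper used at rpc.sh@112-113 above to validate nvmf_get_stats simply sums a numeric field across all poll groups. A minimal sketch, assuming $stats holds the JSON captured from 'rpc_cmd nvmf_get_stats':

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # against the stats printed above: admin_qpairs 0+1+6+0 = 7, io_qpairs 224+223+218+224 = 889
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))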
00:13:10.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.725 00:38:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.726 00:38:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.315 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:17.316 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:17.316 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:17.316 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:17.316 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.316 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:13:17.578 00:13:17.578 --- 10.0.0.2 ping statistics --- 00:13:17.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.578 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:13:17.578 00:13:17.578 --- 10.0.0.1 ping statistics --- 00:13:17.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.578 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=303913 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 303913 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 303913 ']' 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:17.578 00:38:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:17.578 [2024-06-08 00:38:35.768378] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
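The nvmf_tcp_init sequence traced above (nvmf/common.sh@229 onward) splits the two e810 ports so the target listens inside a network namespace while the initiator stays in the root namespace. A condensed sketch of that setup, using the interface names from this log (they will differ on other hosts):

    TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; TGT_NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
    ip netns add "$TGT_NS"
    ip link set "$TGT_IF" netns "$TGT_NS"                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INIT_IF"                     # initiator side stays in the root namespace
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$TGT_NS" ping -c 1 10.0.0.1   # verify both directions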
00:13:17.578 [2024-06-08 00:38:35.768456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.578 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.578 [2024-06-08 00:38:35.838711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.839 [2024-06-08 00:38:35.913814] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.839 [2024-06-08 00:38:35.913855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.839 [2024-06-08 00:38:35.913863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.839 [2024-06-08 00:38:35.913870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.839 [2024-06-08 00:38:35.913875] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.839 [2024-06-08 00:38:35.914015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.839 [2024-06-08 00:38:35.914131] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.839 [2024-06-08 00:38:35.914288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.839 [2024-06-08 00:38:35.914290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:18.410 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28867 00:13:18.671 [2024-06-08 00:38:36.740369] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:18.671 { 00:13:18.671 "nqn": "nqn.2016-06.io.spdk:cnode28867", 00:13:18.671 "tgt_name": "foobar", 00:13:18.671 "method": "nvmf_create_subsystem", 00:13:18.671 "req_id": 1 00:13:18.671 } 00:13:18.671 Got JSON-RPC error response 00:13:18.671 response: 00:13:18.671 { 00:13:18.671 "code": -32603, 00:13:18.671 "message": "Unable to find target foobar" 00:13:18.671 }' 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:18.671 { 00:13:18.671 "nqn": "nqn.2016-06.io.spdk:cnode28867", 00:13:18.671 "tgt_name": "foobar", 00:13:18.671 "method": "nvmf_create_subsystem", 00:13:18.671 "req_id": 1 00:13:18.671 } 00:13:18.671 Got JSON-RPC error response 00:13:18.671 response: 00:13:18.671 { 00:13:18.671 "code": -32603, 00:13:18.671 "message": "Unable to find target foobar" 00:13:18.671 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27324 00:13:18.671 [2024-06-08 00:38:36.916972] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27324: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:18.671 { 00:13:18.671 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:13:18.671 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.671 "method": "nvmf_create_subsystem", 00:13:18.671 "req_id": 1 00:13:18.671 } 00:13:18.671 Got JSON-RPC error response 00:13:18.671 response: 00:13:18.671 { 00:13:18.671 "code": -32602, 00:13:18.671 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.671 }' 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:18.671 { 00:13:18.671 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:13:18.671 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.671 "method": "nvmf_create_subsystem", 00:13:18.671 "req_id": 1 00:13:18.671 } 00:13:18.671 Got JSON-RPC error response 00:13:18.671 response: 00:13:18.671 { 00:13:18.671 "code": -32602, 00:13:18.671 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.671 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:18.671 00:38:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23224 00:13:18.932 [2024-06-08 00:38:37.089550] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23224: invalid model number 'SPDK_Controller' 00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:18.932 { 00:13:18.932 "nqn": "nqn.2016-06.io.spdk:cnode23224", 00:13:18.932 "model_number": "SPDK_Controller\u001f", 00:13:18.932 "method": "nvmf_create_subsystem", 00:13:18.932 "req_id": 1 00:13:18.932 } 00:13:18.932 Got JSON-RPC error response 00:13:18.932 response: 00:13:18.932 { 00:13:18.932 "code": -32602, 00:13:18.932 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.932 }' 00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:18.932 { 00:13:18.932 "nqn": "nqn.2016-06.io.spdk:cnode23224", 00:13:18.932 "model_number": "SPDK_Controller\u001f", 00:13:18.932 "method": "nvmf_create_subsystem", 00:13:18.932 "req_id": 1 00:13:18.932 } 00:13:18.932 Got JSON-RPC error response 00:13:18.932 response: 00:13:18.932 { 00:13:18.932 "code": -32602, 00:13:18.932 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.932 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:18.933 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:18.932 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:18.933 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' ... '127') <full 96-element array of ASCII codes 32-127 elided>
00:13:18.933 00:38:37 nvmf_tcp.nvmf_invalid -- <xtrace of the target/invalid.sh@24/@25 character loop elided: 21 iterations of printf %x / echo -e / string+= built the string echoed below>
00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'L$DlywG14*B~xWkxYM-_o'
00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'L$DlywG14*B~xWkxYM-_o' nqn.2016-06.io.spdk:cnode2836 00:13:19.195 [2024-06-08 00:38:37.422625] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2836: invalid serial number 'L$DlywG14*B~xWkxYM-_o'
00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode2836", "serial_number": "L$DlywG14*B~xWkxYM-_o", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN L$DlywG14*B~xWkxYM-_o" }'
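The elided loop above is gen_random_s from target/invalid.sh, which indexes the chars array with $RANDOM and appends one character per iteration via printf %x / echo -e. A compact sketch of equivalent logic (octal escapes instead of the array; restricted to codes 32-126 for brevity, whereas the script's array also includes 127/DEL):

    gen_random_s() {
        # emit <length> random characters drawn from printable ASCII
        local length=$1 string= i code ch
        for (( i = 0; i < length; i++ )); do
            code=$(( 32 + RANDOM % 95 ))                # 0x20 .. 0x7e
            printf -v ch "\\$(printf '%03o' "$code")"   # code point -> char
            string+=$ch
        done
        printf '%s\n' "$string"
    }
    gen_random_s 21    # e.g. a string like 'L$DlywG14*B~xWkxYM-_o' above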
"message": "Invalid SN L$DlywG14*B~xWkxYM-_o" 00:13:19.195 }' 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:19.195 { 00:13:19.195 "nqn": "nqn.2016-06.io.spdk:cnode2836", 00:13:19.195 "serial_number": "L$DlywG14*B~xWkxYM-_o", 00:13:19.195 "method": "nvmf_create_subsystem", 00:13:19.195 "req_id": 1 00:13:19.195 } 00:13:19.195 Got JSON-RPC error response 00:13:19.195 response: 00:13:19.195 { 00:13:19.195 "code": -32602, 00:13:19.195 "message": "Invalid SN L$DlywG14*B~xWkxYM-_o" 00:13:19.195 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.195 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x6f' 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.456 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=U 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.457 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.458 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.458 00:38:37 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv&ya\kj6 f' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv&ya\kj6 f' nqn.2016-06.io.spdk:cnode24762 00:13:19.719 [2024-06-08 00:38:37.908226] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24762: invalid model number 'gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv&ya\kj6 f' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:19.719 { 00:13:19.719 "nqn": "nqn.2016-06.io.spdk:cnode24762", 00:13:19.719 "model_number": "gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv\u007f&ya\\kj6 f", 00:13:19.719 "method": "nvmf_create_subsystem", 00:13:19.719 "req_id": 1 00:13:19.719 } 00:13:19.719 Got JSON-RPC error response 00:13:19.719 response: 00:13:19.719 { 00:13:19.719 "code": -32602, 00:13:19.719 "message": "Invalid MN gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv\u007f&ya\\kj6 f" 00:13:19.719 }' 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:19.719 { 00:13:19.719 "nqn": "nqn.2016-06.io.spdk:cnode24762", 00:13:19.719 "model_number": "gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv\u007f&ya\\kj6 f", 00:13:19.719 "method": "nvmf_create_subsystem", 00:13:19.719 "req_id": 1 00:13:19.719 } 00:13:19.719 Got JSON-RPC error response 00:13:19.719 response: 00:13:19.719 { 00:13:19.719 "code": -32602, 00:13:19.719 "message": "Invalid MN gDDUohxT-)[PU2>f?C$~uiCq{M|h`sv\u007f&ya\\kj6 f" 00:13:19.719 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.719 00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
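The generated lengths are not arbitrary: in the NVMe Identify Controller structure the serial number field is 20 bytes and the model number field is 40 bytes, so a 21- or 41-character string should be rejected for length alone even when every byte is printable. A minimal reproduction against a running target, reusing the gen_random_s sketch above (run from the spdk checkout; socket path and NQN assumed):

    # expected to fail with -32602 "Invalid SN ..." (21 > 20 bytes)
    scripts/rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode1
    # expected to fail with -32602 "Invalid MN ..." (41 > 40 bytes)
    scripts/rpc.py nvmf_create_subsystem -d "$(gen_random_s 41)" nqn.2016-06.io.spdk:cnode1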
00:38:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:19.980 [2024-06-08 00:38:38.080848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:19.980 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:20.241 [2024-06-08 00:38:38.427381] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:13:20.241 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode", "listen_address": { "trtype": "tcp", "traddr": "", "trsvcid": "4421" }, "method": "nvmf_subsystem_remove_listener", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid parameters" }'
00:13:20.242 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ <expanded $out, identical to the assignment above; elided> != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:13:20.242 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30652 -i 0 00:13:20.503 [2024-06-08 00:38:38.591853] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30652: invalid cntlid range [0-65519]
00:13:20.503 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode30652", "min_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [0-65519]" }'
00:13:20.503 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ <expanded $out elided> == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:20.503 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23501 -i 65520 00:13:20.503 [2024-06-08 00:38:38.764419] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23501: invalid cntlid range [65520-65519]
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode23501", "min_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [65520-65519]" }'
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ <expanded $out elided> == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31375 -I 0 00:13:20.772 [2024-06-08 00:38:38.928962] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31375: invalid cntlid range [1-0]
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode31375", "max_cntlid": 0, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-0]" }'
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ <expanded $out elided> == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:20.772 00:38:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20835 -I 65520 00:13:21.033 [2024-06-08 00:38:39.105493] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20835: invalid cntlid range [1-65520]
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode20835", "max_cntlid": 65520, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [1-65520]" }'
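Taken together with the min > max case below, the four rejections above imply the rule the target enforces: a controller ID range is valid only when 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF; the top values 0xFFFE and 0xFFFF are reserved by NVMe). A hypothetical predicate mirroring that rule, not SPDK's actual code:

    valid_cntlid_range() {
        # 1 <= min <= max <= 65519, per the error messages above
        local min=$1 max=$2
        (( min >= 1 && min <= max && max <= 65519 ))
    }
    valid_cntlid_range 0 65519 || echo 'invalid, as reported for [0-65519]'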
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ <expanded $out, identical to the assignment above; elided> == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15303 -i 6 -I 5 00:13:21.033 [2024-06-08 00:38:39.278043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15303: invalid cntlid range [6-5]
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode15303", "min_cntlid": 6, "max_cntlid": 5, "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid cntlid range [6-5]" }'
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ <expanded $out elided> == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:21.033 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:13:21.294 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: { "name": "foobar", "method": "nvmf_delete_target", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "The specified target doesn'\''t exist, cannot delete it." }'
00:13:21.294 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:21.294 { 00:13:21.294 "name": "foobar", 00:13:21.294 "method": "nvmf_delete_target", 00:13:21.294 "req_id": 1 00:13:21.294 } 00:13:21.294 Got JSON-RPC error response 00:13:21.294 response: 00:13:21.294 { 00:13:21.294 "code": -32602, 00:13:21.294 "message": "The specified target doesn't exist, cannot delete it."
00:13:21.294 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.295 rmmod nvme_tcp 00:13:21.295 rmmod nvme_fabrics 00:13:21.295 rmmod nvme_keyring 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 303913 ']' 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 303913 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 303913 ']' 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 303913 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 303913 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 303913' 00:13:21.295 killing process with pid 303913 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 303913 00:13:21.295 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 303913 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.555 00:38:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.467 00:38:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.467 00:13:23.467 real 0m13.257s 00:13:23.467 user 0m19.140s 00:13:23.467 sys 0m6.173s 00:13:23.467 00:38:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:23.467 00:38:41 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.467 ************************************ 00:13:23.467 END TEST nvmf_invalid 00:13:23.468 ************************************ 00:13:23.729 00:38:41 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:23.729 00:38:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:23.729 00:38:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:23.729 00:38:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.729 ************************************ 00:13:23.729 START TEST nvmf_abort 00:13:23.729 ************************************ 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:23.729 * Looking for test storage... 00:13:23.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=<toolchain PATH assembled by paths/export.sh@2-4: the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prefixes repeatedly prepended to the system PATH; the near-identical multi-kilobyte expansions of @2, @3, @4 and the @6 echo are elided> 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.729 00:38:41
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.729 00:38:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.346 
00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:30.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:30.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:30.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:30.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.346 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.607 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:13:30.868 00:13:30.868 --- 10.0.0.2 ping statistics --- 00:13:30.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.868 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:13:30.868 00:13:30.868 --- 10.0.0.1 ping statistics --- 00:13:30.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.868 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=308885 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 308885 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 308885 ']' 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:30.868 00:38:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:30.868 [2024-06-08 00:38:49.034423] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
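The ping exchange above is the smoke test for the namespace plumbing that nvmf_tcp_init just performed. Stripped of the xtrace noise, that setup reduces to the following minimal sketch, assuming the two E810 ports came up as cvl_0_0 and cvl_0_1 exactly as in this run:

    # The target port moves into its own network namespace so initiator and
    # target can exchange real TCP traffic on one host (commands as traced above).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator check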
00:13:30.868 [2024-06-08 00:38:49.034484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.868 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.868 [2024-06-08 00:38:49.104126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.129 [2024-06-08 00:38:49.199819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.129 [2024-06-08 00:38:49.199870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.129 [2024-06-08 00:38:49.199878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.129 [2024-06-08 00:38:49.199885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.129 [2024-06-08 00:38:49.199891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.129 [2024-06-08 00:38:49.200020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.129 [2024-06-08 00:38:49.200186] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.129 [2024-06-08 00:38:49.200188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 [2024-06-08 00:38:49.850314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 Malloc0 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 Delay0 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.701 00:38:49 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.701 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.701 [2024-06-08 00:38:49.927844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:31.702 00:38:49 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:31.702 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.962 [2024-06-08 00:38:50.046168] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:34.510 Initializing NVMe Controllers 00:13:34.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:34.510 controller IO queue size 128 less than required 00:13:34.510 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:34.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:34.510 Initialization complete. Launching workers. 
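Written out as direct rpc.py calls, the target-side configuration abort.sh traced above amounts to the sketch below (paths shortened; the script actually routes these through its rpc_cmd wrapper). The Delay0 bdev layers roughly a second of artificial latency over the Malloc0 RAM disk, which keeps I/Os in flight long enough for the abort example to have commands to cancel:

    # Abort-test target setup, restated as plain rpc.py invocations.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # flags exactly as traced
    rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MB RAM disk, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s avg/p99 latencies (usec)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator: one core (-c 0x1), one second (-t 1), queue depth 128 (-q 128),
    # aborting outstanding commands as it goes.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128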
00:13:34.510 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 32027 00:13:34.510 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32089, failed to submit 62 00:13:34.510 success 32031, unsuccess 58, failed 0 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.510 rmmod nvme_tcp 00:13:34.510 rmmod nvme_fabrics 00:13:34.510 rmmod nvme_keyring 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 308885 ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 308885 ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 308885' 00:13:34.510 killing process with pid 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 308885 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.510 00:38:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.425 00:38:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.425 00:13:36.425 real 0m12.829s 00:13:36.425 user 0m13.898s 00:13:36.425 sys 0m6.101s 00:13:36.425 00:38:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.425 00:38:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.425 ************************************ 00:13:36.425 END TEST nvmf_abort 00:13:36.425 ************************************ 00:13:36.425 00:38:54 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:36.425 00:38:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:36.425 00:38:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.425 00:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.425 ************************************ 00:13:36.425 START TEST nvmf_ns_hotplug_stress 00:13:36.425 ************************************ 00:13:36.425 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:36.687 * Looking for test storage... 00:13:36.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.687 00:38:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.687 00:38:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.687 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:44.835 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:44.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.835 00:39:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:44.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.835 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:44.836 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
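The gather_supported_nvmf_pci_devs pass above (already seen at the start of nvmf_abort) matches PCI vendor:device IDs against known NIC families and resolves each hit to its kernel netdev through sysfs. A rough standalone equivalent for the E810 (8086:159b) parts on this node, using lspci in place of the harness's pci_bus_cache:

    # Find E810 functions by PCI ID and list the netdevs the kernel gave them.
    for pci in $(lspci -Dmn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done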
00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:13:44.836 00:13:44.836 --- 10.0.0.2 ping statistics --- 00:13:44.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.836 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:13:44.836 00:13:44.836 --- 10.0.0.1 ping statistics --- 00:13:44.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.836 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=313960 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 313960 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 313960 ']' 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:44.836 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.836 [2024-06-08 00:39:02.010767] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:44.836 [2024-06-08 00:39:02.010831] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.836 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.836 [2024-06-08 00:39:02.097231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.836 [2024-06-08 00:39:02.190557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
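nvmfappstart has just launched the target for the hotplug test; the -m 0xE mask (binary 1110) is why three reactors report in on cores 1-3 below. A sketch of the same by hand, where the readiness poll is an assumption standing in for the harness's waitforlisten helper:

    # Start nvmf_tgt inside the target netns on cores 1-3 with all tracepoint
    # groups enabled (-e 0xFFFF), then poll the RPC socket until the app answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done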
00:13:44.836 [2024-06-08 00:39:02.190614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.836 [2024-06-08 00:39:02.190622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.836 [2024-06-08 00:39:02.190630] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.836 [2024-06-08 00:39:02.190636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.836 [2024-06-08 00:39:02.190799] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.836 [2024-06-08 00:39:02.190966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.836 [2024-06-08 00:39:02.190967] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:44.836 00:39:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.836 [2024-06-08 00:39:02.980309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.836 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.097 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.097 [2024-06-08 00:39:03.321765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.097 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.357 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:45.619 Malloc0 00:13:45.619 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:45.619 Delay0 00:13:45.619 00:39:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.880 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:46.140 NULL1 00:13:46.140 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:46.140 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:46.140 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=314369 00:13:46.140 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:46.140 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.140 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.400 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.661 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:46.661 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:46.661 true 00:13:46.661 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:46.661 00:39:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.921 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.181 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:47.181 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:47.181 true 00:13:47.181 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:47.181 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.441 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.701 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:47.701 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:47.701 true 00:13:47.701 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:47.701 00:39:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.961 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.961 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:47.961 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:48.222 true 00:13:48.222 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:48.222 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.483 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.483 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:48.484 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:48.744 true 00:13:48.744 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:48.744 00:39:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.004 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.004 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:49.004 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:49.264 true 00:13:49.264 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:49.264 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.524 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.524 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:49.524 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:49.785 true 00:13:49.785 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:49.785 00:39:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.045 00:39:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.045 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:50.045 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:50.306 true 00:13:50.306 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:50.306 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.567 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.567 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:50.567 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:50.828 true 00:13:50.828 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:50.828 00:39:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.089 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.089 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:51.089 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:51.350 true 00:13:51.350 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:51.350 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.350 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.611 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:51.611 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:51.872 true 00:13:51.872 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:51.872 00:39:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.872 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.133 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:52.133 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:52.394 true 00:13:52.394 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:52.394 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.394 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.656 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:52.656 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:52.919 true 00:13:52.919 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:52.920 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.920 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.221 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:53.221 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:53.482 true 00:13:53.482 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:53.482 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.482 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.743 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:53.743 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:53.743 true 00:13:54.003 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:54.003 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.003 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.262 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:54.262 00:39:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:54.522 true 00:13:54.522 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:54.522 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.522 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.783 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:54.783 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:54.783 true 00:13:55.043 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:55.043 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.043 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.303 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:55.303 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:55.303 true 00:13:55.564 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:55.564 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.564 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.824 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:55.824 00:39:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:55.824 true 00:13:56.085 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:56.085 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.085 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.346 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:56.346 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 
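The numbered passes keep repeating below; each is one turn of the ns_hotplug_stress main loop, which hot-removes and re-adds namespace 1 under live reads from the 30-second spdk_nvme_perf randread job (PERF_PID 314369) while growing the NULL1 bdev by one size step per pass. Condensed into a sketch:

    # One size step per pass; bdev_null_resize prints 'true' on success.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # initiator still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done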
00:13:56.346 true 00:13:56.607 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:56.607 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.607 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.868 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:56.868 00:39:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:56.868 true 00:13:56.868 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:56.868 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.128 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.389 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:57.389 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:57.389 true 00:13:57.389 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:57.389 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.650 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.910 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:57.910 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:57.910 true 00:13:57.910 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:57.910 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.171 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.432 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:58.432 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:58.432 true 00:13:58.432 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:58.432 00:39:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.694 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.954 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:58.955 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:58.955 true 00:13:58.955 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:58.955 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.215 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.476 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:59.476 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:59.476 true 00:13:59.476 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:59.476 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.737 00:39:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.998 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:59.998 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:59.998 true 00:13:59.998 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:13:59.998 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.258 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.518 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:00.518 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:00.518 true 00:14:00.518 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:00.518 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:00.779 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.040 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:01.040 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:01.040 true 00:14:01.040 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:01.040 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.301 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.562 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:01.562 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:01.562 true 00:14:01.562 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:01.562 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.822 00:39:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.083 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:02.083 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:02.083 true 00:14:02.083 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:02.083 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.344 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.605 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:02.605 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:02.605 true 00:14:02.605 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:02.605 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.866 00:39:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.127 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:03.127 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:03.127 true 00:14:03.127 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:03.127 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.388 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.649 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:03.649 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:03.649 true 00:14:03.649 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:03.649 00:39:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.910 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.910 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:03.910 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:04.170 true 00:14:04.171 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:04.171 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.432 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.432 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:04.432 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:04.692 true 00:14:04.692 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:04.692 00:39:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.962 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.962 00:39:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:04.962 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:05.225 true 00:14:05.225 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:05.225 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.487 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.487 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:05.487 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:05.748 true 00:14:05.748 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:05.748 00:39:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.049 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.049 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:06.049 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:06.049 true 00:14:06.310 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:06.310 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.310 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.571 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:06.571 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:06.571 true 00:14:06.831 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:06.831 00:39:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.831 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.093 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:07.093 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:07.093 true 00:14:07.355 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:07.355 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.355 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.615 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:07.615 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:07.615 true 00:14:07.615 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:07.615 00:39:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.876 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.137 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:08.137 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:08.137 true 00:14:08.137 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:08.137 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.397 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.658 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:08.658 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:08.658 true 00:14:08.658 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:08.658 00:39:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.918 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.179 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:09.179 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:09.179 true 00:14:09.180 00:39:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:09.180 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.440 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.700 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:09.700 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:09.700 true 00:14:09.700 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:09.700 00:39:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.961 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.221 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:10.221 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:10.221 true 00:14:10.221 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:10.221 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.482 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.743 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:10.743 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:10.743 true 00:14:10.743 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:10.743 00:39:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.004 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.265 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:11.265 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:11.265 true 00:14:11.265 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:11.265 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.526 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.526 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:11.526 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:11.786 true 00:14:11.786 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:11.786 00:39:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.047 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.047 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:12.047 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:12.307 true 00:14:12.307 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:12.307 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.568 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.568 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:12.568 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:12.827 true 00:14:12.827 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:12.827 00:39:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.087 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.087 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:13.087 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:13.348 true 00:14:13.348 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:13.348 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.608 
00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.608 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:13.608 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:13.869 true 00:14:13.869 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:13.869 00:39:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.869 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.130 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:14.130 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:14.391 true 00:14:14.391 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:14.391 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.391 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.652 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:14.652 00:39:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:14.916 true 00:14:14.916 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:14.916 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.916 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.179 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:15.180 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:15.440 true 00:14:15.440 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369 00:14:15.440 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.440 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:14:15.700 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058
00:14:15.700 00:39:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058
00:14:15.961 true
00:14:15.961 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369
00:14:15.961 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.961 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:16.221 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059
00:14:16.221 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059
00:14:16.482 true
00:14:16.482 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369
00:14:16.482 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:16.482 Initializing NVMe Controllers
00:14:16.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:16.482 Controller IO queue size 128, less than required.
00:14:16.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:16.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:16.483 Initialization complete. Launching workers.
00:14:16.483 ========================================================
00:14:16.483                                                                              Latency(us)
00:14:16.483 Device Information                                                       :     IOPS    MiB/s   Average       min       max
00:14:16.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31894.18    15.57   4013.24   1751.27  10310.13
00:14:16.483 ========================================================
00:14:16.483 Total                                                                    : 31894.18    15.57   4013.24   1751.27  10310.13
00:14:16.483 
00:14:16.483 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:16.743 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060
00:14:16.743 00:39:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060
00:14:17.004 true
00:14:17.004 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 314369
00:14:17.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (314369) - No such process
00:14:17.004 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 314369
00:14:17.004 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.004 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:17.264 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:17.264 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:17.264 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:17.264 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:17.264 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:17.525 null0
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:17.525 null1
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:17.525 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:17.786 null2
00:14:17.786 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:17.786 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:17.786 00:39:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:17.786 null3 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:18.047 null4 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.047 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:18.307 null5 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:18.307 null6 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.307 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:18.569 null7 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
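Here the test has entered its concurrent phase: eight null bdevs (null0 through null7, created with bdev_null_create using the arguments 100 and 4096 seen in the trace) are set up, and eight add_remove workers are forked, one per namespace ID. A sketch of the add_remove helper that the @14-@18 entries trace; the names and RPC arguments are from the log, while the function body is reconstructed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do             # ten add/remove rounds per worker
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

Unlike the first phase, where nvmf_subsystem_add_ns let the target pick the namespace ID, each worker here pins its ID with -n so the eight concurrent loops do not collide.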
00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.569 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
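The interleaved @62/@63/@64 entries are the launch loop running: each add_remove call is sent to the background and its PID appended to the pids array, which is why the traces of all eight workers mix together from here on. The wait with the eight worker PIDs appears just below. Continuing the sketch above (the loop syntax is inferred; the pids bookkeeping and the wait are visible in the trace):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # nsid 1..8 mapped onto null0..null7
        pids+=($!)                                 # remember each worker PID
    done
    wait "${pids[@]}"                              # e.g. "wait 321374 321375 ..." in this trace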
00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 321374 321375 321377 321379 321381 321383 321384 321386 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.570 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.831 00:39:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.831 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.091 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 
00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.352 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.613 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.874 00:39:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.874 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.141 
00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.141 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.464 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 
00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.726 00:39:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.988 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.249 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.509 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.770 00:39:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.770 00:39:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.770 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.771 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.031 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.031 rmmod nvme_tcp 00:14:22.293 rmmod nvme_fabrics 00:14:22.293 rmmod nvme_keyring 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 313960 ']' 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 313960 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 313960 ']' 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 313960 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 313960 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 
-- # echo 'killing process with pid 313960' killing process with pid 313960
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 313960
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 313960
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:22.293 00:39:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:24.842 00:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:24.842
00:14:24.842 real 0m47.916s
00:14:24.843 user 3m6.756s
00:14:24.843 sys 0m20.325s
00:14:24.843 00:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable
00:14:24.843 00:39:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:24.843 ************************************
00:14:24.843 END TEST nvmf_ns_hotplug_stress
00:14:24.843 ************************************
00:14:24.843 00:39:42 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:24.843 00:39:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:14:24.843 00:39:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:14:24.843 00:39:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:24.843 ************************************
00:14:24.843 START TEST nvmf_connect_stress
00:14:24.843 ************************************
00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:24.843 * Looking for test storage...
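For reference, the add/remove churn traced above reduces to a tight loop of rpc.py calls racing namespace attach against detach on the same subsystem. The sketch below only reproduces the observable pattern; it is not the verbatim ns_hotplug_stress.sh, and the shuffled ordering and loop structure are assumptions, while the NQN, the nsid range 1..8, and the null0..null7 bdev names are taken directly from the trace:

    # Hypothetical reconstruction of the hotplug churn seen in the trace above.
    # Assumes a running nvmf target that already has subsystem cnode1 and the
    # null bdevs null0..null7 created.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    i=0
    while ((i < 10)); do
        for n in $(shuf -e {1..8}); do          # attach nsid n, backed by null(n-1)
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do          # detach in another random order
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"
        done
        ((++i))
    done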
00:14:24.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.843 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.844 00:39:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:31.435 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:31.436 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:31.436 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:31.436 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.436 00:39:48 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:14:31.436 Found net devices under 0000:4b:00.1: cvl_0_1
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:31.436 00:39:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:31.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:31.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:14:31.436 00:14:31.436 --- 10.0.0.2 ping statistics --- 00:14:31.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.436 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:14:31.436 00:14:31.436 --- 10.0.0.1 ping statistics --- 00:14:31.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.436 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=326246 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 326246 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 326246 ']' 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.436 00:39:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.436 [2024-06-08 00:39:49.334726] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:14:31.437 [2024-06-08 00:39:49.334790] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.437 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.437 [2024-06-08 00:39:49.421226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.437 [2024-06-08 00:39:49.515216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.437 [2024-06-08 00:39:49.515274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.437 [2024-06-08 00:39:49.515282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.437 [2024-06-08 00:39:49.515289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.437 [2024-06-08 00:39:49.515295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.437 [2024-06-08 00:39:49.515471] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.437 [2024-06-08 00:39:49.515729] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.437 [2024-06-08 00:39:49.515730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.009 [2024-06-08 00:39:50.173206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.009 [2024-06-08 00:39:50.197677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.009 NULL1 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=326559 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.009 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.270 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.531 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.531 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:32.531 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.531 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.531 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.792 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.792 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:32.792 00:39:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.792 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.792 00:39:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.052 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.052 00:39:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:33.052 00:39:51 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.052 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.052 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.623 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.623 00:39:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:33.623 00:39:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.623 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.623 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.884 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.884 00:39:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:33.884 00:39:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.884 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.884 00:39:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.144 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.144 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:34.144 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.144 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.144 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.405 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.405 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:34.405 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.405 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.406 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.667 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.667 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:34.667 00:39:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.667 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.667 00:39:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.237 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.237 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:35.237 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.237 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.237 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.498 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.498 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:35.498 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.498 00:39:53 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.498 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.758 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.759 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:35.759 00:39:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.759 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.759 00:39:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.019 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.019 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:36.019 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.019 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.019 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.280 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.280 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:36.280 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.280 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.280 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.852 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.852 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:36.852 00:39:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.852 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.852 00:39:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.113 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.113 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:37.113 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.113 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.113 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.373 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.373 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:37.373 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.373 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.373 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.634 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.634 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:37.634 00:39:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.634 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:14:37.634 00:39:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.894 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.894 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:37.894 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.894 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.894 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.465 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.465 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:38.465 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.465 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.465 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.725 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.725 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:38.725 00:39:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.725 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.725 00:39:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.986 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.986 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:38.986 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.986 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.986 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.275 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:39.275 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:39.275 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.275 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:39.275 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.536 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:39.536 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:39.536 00:39:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.536 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:39.536 00:39:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.107 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.107 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:40.107 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.107 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.107 00:39:58 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.368 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.368 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:40.368 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.368 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.368 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.628 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.628 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:40.628 00:39:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.628 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.628 00:39:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.889 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.889 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:40.889 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.889 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.889 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.150 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.411 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:41.411 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.411 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.411 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.672 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.672 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:41.672 00:39:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.672 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.672 00:39:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.932 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.932 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:41.932 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.932 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.932 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.193 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 326559 00:14:42.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (326559) - No such process 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- 
# wait 326559 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:42.193 00:40:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.194 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.194 rmmod nvme_tcp 00:14:42.194 rmmod nvme_fabrics 00:14:42.194 rmmod nvme_keyring 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 326246 ']' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 326246 ']' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 326246' 00:14:42.455 killing process with pid 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 326246 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.455 00:40:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.005 00:40:02 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.005 00:14:45.005 real 0m20.031s 00:14:45.005 user 0m41.771s 00:14:45.005 sys 0m8.206s 00:14:45.005 00:40:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:45.005 00:40:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.005 ************************************ 00:14:45.005 END TEST nvmf_connect_stress 00:14:45.005 ************************************ 00:14:45.005 00:40:02 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:45.005 00:40:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:45.005 00:40:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:45.005 00:40:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.005 ************************************ 00:14:45.005 START TEST nvmf_fused_ordering 00:14:45.005 ************************************ 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:45.005 * Looking for test storage... 00:14:45.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.005 
00:40:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.005 
00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.005 00:40:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.006 00:40:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.006 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:45.006 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:45.006 00:40:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:45.006 00:40:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.594 00:40:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:51.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:51.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.594 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:51.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:51.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.595 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:14:51.857 00:14:51.857 --- 10.0.0.2 ping statistics --- 00:14:51.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.857 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:14:51.857 00:14:51.857 --- 10.0.0.1 ping statistics --- 00:14:51.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.857 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=332582 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 332582 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 332582 ']' 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:51.857 00:40:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.857 [2024-06-08 00:40:10.043223] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:51.857 [2024-06-08 00:40:10.043291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.857 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.857 [2024-06-08 00:40:10.133098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.119 [2024-06-08 00:40:10.230048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.119 [2024-06-08 00:40:10.230104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.119 [2024-06-08 00:40:10.230112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.119 [2024-06-08 00:40:10.230119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.119 [2024-06-08 00:40:10.230125] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
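The fused_ordering pass boots the same target binary pinned to a single core (-m 0x2, where the connect_stress run above used 0xE), since command ordering rather than parallelism is what this test exercises. The rpc_cmd invocations in the trace that follows are the harness's wrappers around scripts/rpc.py talking to /var/tmp/spdk.sock; the equivalent manual sequence would look roughly like this (a sketch, flags copied from the trace, paths relative to the spdk checkout):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # flags taken verbatim from NVMF_TRANSPORT_OPTS
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                            # allow any host, set serial, cap at 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                # listen on the namespaced target address
scripts/rpc.py bdev_null_create NULL1 1000 512                # 1000 MB null bdev, 512 B blocks ("size: 1GB" below)
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Once the namespace is attached, the fused_ordering app connects with the same trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 transport ID, and the fused_ordering(N) lines below are its per-iteration progress markers as it exercises fused command submission against cnode1.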
00:14:52.119 [2024-06-08 00:40:10.230150] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 [2024-06-08 00:40:10.877674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 [2024-06-08 00:40:10.893879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 NULL1 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.691 00:40:10 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.691 00:40:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:52.691 [2024-06-08 00:40:10.949346] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:52.691 [2024-06-08 00:40:10.949390] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332926 ] 00:14:52.952 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.214 Attached to nqn.2016-06.io.spdk:cnode1 00:14:53.214 Namespace ID: 1 size: 1GB
00:14:53.214 fused_ordering(0) through fused_ordering(1023): all 1024 fused ordering entries reported sequentially between 00:14:53.214 and 00:14:55.565
00:14:55.565 00:40:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:40:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:55.565 rmmod nvme_tcp 00:14:55.565 rmmod nvme_fabrics 00:14:55.566 rmmod nvme_keyring 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 332582 ']' 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 332582 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 332582 ']' 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 332582 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 332582 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 332582' 00:14:55.566 killing process with pid 332582 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 332582 00:14:55.566 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 332582 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.827 00:40:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.741 00:40:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.741 00:14:57.741 real 0m13.141s 00:14:57.741 user 0m7.124s 00:14:57.741 sys 0m6.995s 00:14:57.741 00:40:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:57.741 00:40:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.741 ************************************ 00:14:57.741 END TEST nvmf_fused_ordering 00:14:57.741 ************************************ 00:14:57.741 00:40:15 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:57.741 00:40:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:57.741 00:40:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:57.741 00:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.741 
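Everything the fused_ordering test did above was driven through rpc_cmd against the target's RPC socket. Restated as standalone rpc.py calls (the same RPCs appear verbatim in the xtrace output; only the $rpc shorthand is introduced here), the provisioning sequence was roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as in the test
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The -m 10 cap (maximum namespaces) and the 1000 MiB null bdev match the "Namespace ID: 1 size: 1GB" line the fused_ordering initiator printed once attached; the binary then iterates 1024 times, printing one fused_ordering(N) line per completed entry.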
************************************ 00:14:57.741 START TEST nvmf_delete_subsystem 00:14:57.741 ************************************ 00:14:57.741 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:58.002 * Looking for test storage... 00:14:58.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.002 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.003 00:40:16 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.003 00:40:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:06.175 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.175 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:06.176 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.176 
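The xtrace around this point is nvmf/common.sh enumerating supported NICs: it matches the two Intel E810 functions (vendor 0x8086, device 0x159b) from the PCI bus cache, confirms the bound driver is ice rather than unknown or unbound, and resolves each function to its kernel net device through sysfs. A rough standalone equivalent, assuming lspci and sysfs are available (the loop below is illustrative, not taken from the harness):

for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
    # Each PCI function exposes its bound net device(s) under sysfs.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$dev" ]] || continue
        echo "Found net device under $pci: ${dev##*/}"
    done
done

This mirrors the "Found net devices under 0000:4b:00.0/1" lines that follow, where the two functions resolve to cvl_0_0 and cvl_0_1 before cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace.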
00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:06.176 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:06.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.176 00:40:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.176 00:40:23 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.771 ms 00:15:06.176 00:15:06.176 --- 10.0.0.2 ping statistics --- 00:15:06.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.176 rtt min/avg/max/mdev = 0.771/0.771/0.771/0.000 ms 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:15:06.176 00:15:06.176 --- 10.0.0.1 ping statistics --- 00:15:06.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.176 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=337580 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 337580 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 337580 ']' 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@835 -- # local max_retries=100 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:06.176 00:40:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 [2024-06-08 00:40:23.339821] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:06.176 [2024-06-08 00:40:23.339884] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.176 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.176 [2024-06-08 00:40:23.412102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:06.176 [2024-06-08 00:40:23.486177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.176 [2024-06-08 00:40:23.486216] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.176 [2024-06-08 00:40:23.486223] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.176 [2024-06-08 00:40:23.486230] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.176 [2024-06-08 00:40:23.486236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.176 [2024-06-08 00:40:23.486380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.176 [2024-06-08 00:40:23.486380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 [2024-06-08 00:40:24.166039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.176 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.177 [2024-06-08 00:40:24.182190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.177 NULL1 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.177 Delay0 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=337711 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:06.177 00:40:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:06.177 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.177 [2024-06-08 00:40:24.256802] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
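The xtrace entries above amount to a small provisioning script for the delete_subsystem test. A condensed sketch of the same sequence, using the rpc.py calls exactly as they appear in the trace (the only assumption is that nvmf_tgt is already up inside the cvl_0_0_ns_spdk namespace and listening on the default /var/tmp/spdk.sock):

    #!/usr/bin/env bash
    # Provisioning sketch for the delete_subsystem target (commands as traced above).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -o/-u are the harness's transport tuning
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                           # -a: allow any host, -m: at most 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                               # listener on the NIC moved into the netns
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000              # wrap it with ~1,000,000 us avg/p99 read+write latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # expose Delay0 as the subsystem's namespace

The million-microsecond Delay0 latencies are what keep a full 128-deep queue in flight per core, so the later nvmf_delete_subsystem call is guaranteed to abort outstanding I/O.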
00:15:08.092 00:40:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:08.093 00:40:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:08.093 00:40:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:08.093 [hundreds of interleaved completion records from the aborted perf I/O: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", continuing between the error entries below]
00:15:08.093 [2024-06-08 00:40:26.341720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be040 is same with the state(5) to be set
00:15:08.093 [2024-06-08 00:40:26.342087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc80 is same with the state(5) to be set
00:15:08.093 [2024-06-08 00:40:26.345207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa73800bfe0 is same with the state(5) to be set
00:15:09.036 [2024-06-08 00:40:27.314612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99d550 is same with the state(5) to be set
00:15:09.298 [2024-06-08 00:40:27.345914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bde60 is same with the state(5) to be set
00:15:09.298 [2024-06-08 00:40:27.347267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9be220 is same with the state(5) to be set
00:15:09.298 [2024-06-08 00:40:27.347883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa738000c00 is same with the state(5) to be set
00:15:09.298 [2024-06-08 00:40:27.348107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa73800c2f0 is same with the state(5) to be set
00:15:09.298 Initializing NVMe Controllers
00:15:09.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:09.298 Controller IO queue size 128, less than required.
00:15:09.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:09.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:09.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:09.298 Initialization complete. Launching workers.
00:15:09.298 ========================================================
00:15:09.298                                                                     Latency(us)
00:15:09.298 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:15:09.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     189.18       0.09  892890.89     373.20 1007962.41
00:15:09.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     144.38       0.07  973944.46     282.94 2002360.42
00:15:09.298 ========================================================
00:15:09.298 Total                                                                    :     333.56       0.16  927973.78     282.94 2002360.42
00:15:09.298
00:15:09.298 [2024-06-08 00:40:27.348683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99d550 (9): Bad file descriptor
00:15:09.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:09.298 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 337711
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 337711
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (337711) - No such process
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 337711
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 337711
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 337711
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
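The delay++/kill -0/sleep triplets above are a bounded poll for the perf process to die once its subsystem is deleted, and the NOT wait entries then assert it died with a failure. A reconstruction of that logic from the xtrace (variable names follow the trace; NOT is the autotest helper that succeeds only when its command fails):

    # Reconstructed from the xtrace: wait up to ~15 s for spdk_nvme_perf
    # ($perf_pid, 337711 in this run) to exit after nvmf_delete_subsystem.
    delay=0
    while kill -0 "$perf_pid"; do      # probe: is perf still alive?
        sleep 0.5
        if (( delay++ > 30 )); then    # bail out rather than hang the test
            exit 1
        fi
    done
    NOT wait "$perf_pid"               # perf must have exited non-zero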
00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:09.870 [2024-06-08 00:40:27.878905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=338510 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510 00:15:09.870 00:40:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.870 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.870 [2024-06-08 00:40:27.949491] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
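For the second pass the subsystem is recreated and the same perf tool is relaunched with a fixed 3-second run time, then polled with the same kill -0 loop while it finishes naturally. The workload flags in the traced command line decode roughly as follows (meanings per spdk_nvme_perf's usage text; treat the annotations as a reading aid, not as authoritative documentation):

    # The traced perf invocation, annotated:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC \                                                    # core mask: workers on cores 2 and 3
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \  # remote target transport ID
        -t 3 \                                                      # run for 3 seconds
        -q 128 \                                                    # queue depth 128 per worker
        -w randrw -M 70 \                                           # random mixed I/O, 70% reads
        -o 512 \                                                    # 512-byte I/O size
        -P 4                                                        # I/O qpairs per namespace

With Delay0 adding 1,000,000 us to every I/O, the roughly 1.002 s average latencies in the summary a few entries below are exactly what this command should report.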
00:15:10.131 00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:10.702 00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:11.273 00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:11.843 00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:12.414 00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:12.676 00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
00:40:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:12.937 Initializing NVMe Controllers
00:15:12.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:12.937 Controller IO queue size 128, less than required.
00:15:12.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:12.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:12.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:12.937 Initialization complete. Launching workers.
00:15:12.937 ========================================================
00:15:12.937                                                                     Latency(us)
00:15:12.937 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:15:12.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002010.62 1000182.33 1005437.50
00:15:12.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002997.67 1000284.07 1009283.19
00:15:12.937 ========================================================
00:15:12.937 Total                                                                    :     256.00       0.12 1002504.15 1000182.33 1009283.19
00:15:12.937
00:15:13.198 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 338510
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (338510) - No such process
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 338510
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 337580 ']'
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 337580
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 337580 ']'
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 337580
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 337580
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 337580'
killing process with pid 337580
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 337580
00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 337580
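The nvmftestfini teardown is visible in the killprocess trace above and continues below with the namespace cleanup. A minimal sketch of that sequence (the break-on-success in the retry loop is paraphrased from the trace rather than quoted; $nvmfpid is 337580 in this run):

    # Teardown sketch matching the nvmftestfini trace.
    sync
    set +e                                # module unload may transiently fail
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # prints the rmmod lines captured above
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"                       # killprocess: terminate the nvmf_tgt reactor
    wait "$nvmfpid"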
00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.459 00:40:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.007 00:40:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.007 00:15:16.007 real 0m17.750s 00:15:16.007 user 0m30.449s 00:15:16.007 sys 0m6.156s 00:15:16.007 00:40:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:16.008 00:40:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:16.008 ************************************ 00:15:16.008 END TEST nvmf_delete_subsystem 00:15:16.008 ************************************ 00:15:16.008 00:40:33 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:16.008 00:40:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:16.008 00:40:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:16.008 00:40:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.008 ************************************ 00:15:16.008 START TEST nvmf_ns_masking 00:15:16.008 ************************************ 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:16.008 * Looking for test storage... 
00:15:16.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=61bad9d6-7763-4073-90a9-201bd7bb741c 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.008 00:40:33 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.008 00:40:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:22.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.600 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:22.601 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:22.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
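The "Found net devices under ..." entries here come from mapping each matched PCI function to its kernel interface through sysfs. A minimal sketch of that discovery step, assuming pci_devs has already been filtered to supported device IDs (the two Intel 0x159b E810 functions in this run):

    # Sysfs walk behind the "Found net devices under ..." messages.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done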
00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:22.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.601 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.863 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.863 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.863 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.863 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.863 00:40:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:15:22.863 00:15:22.863 --- 10.0.0.2 ping statistics --- 00:15:22.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.863 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:15:22.863 00:15:22.863 --- 10.0.0.1 ping statistics --- 00:15:22.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.863 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=343282 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 343282 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 343282 ']' 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:22.863 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.863 [2024-06-08 00:40:41.144086] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
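The ns_masking run that follows builds a one-subsystem target and then probes namespace visibility from the initiator. Condensed into a sketch, with the full /var/jenkins/... path to scripts/rpc.py shortened to rpc.py and the "ip netns exec cvl_0_0_ns_spdk" wrapper omitted for readability (both are shorthand for what the trace actually runs), the flow is:

# Target side: transport, backing bdevs, subsystem, namespace, listener.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1      # auto-visible
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then decide visibility per NSID.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -a 10.0.0.2 -s 4420 -i 4
nvme list-ns /dev/nvme0 | grep 0x1                   # is the NSID listed?
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid  # all-zero NGUID => masked

# Masking proper: a namespace added with --no-auto-visible stays hidden until
# the host NQN is granted access, and is hidden again when access is revoked.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The connect commands in the trace additionally pin a host UUID with -I (elided here), and the run removes namespace 1 before re-adding it with --no-auto-visible. The test's ns_is_visible helper treats an all-zero NGUID from id-ns as "not visible to this host", which is the comparison against the long \0...\0 pattern seen throughout the trace below.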
00:15:22.863 [2024-06-08 00:40:41.144146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.124 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.124 [2024-06-08 00:40:41.213760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.124 [2024-06-08 00:40:41.289771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.124 [2024-06-08 00:40:41.289807] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.124 [2024-06-08 00:40:41.289815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.124 [2024-06-08 00:40:41.289821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.124 [2024-06-08 00:40:41.289827] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.124 [2024-06-08 00:40:41.289960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.124 [2024-06-08 00:40:41.290083] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.124 [2024-06-08 00:40:41.290228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.124 [2024-06-08 00:40:41.290229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.697 00:40:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:23.958 [2024-06-08 00:40:42.108460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.958 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:23.958 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:23.958 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:24.218 Malloc1 00:15:24.218 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:24.218 Malloc2 00:15:24.218 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:24.506 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:24.778 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.778 [2024-06-08 00:40:42.945069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.778 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:24.778 00:40:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61bad9d6-7763-4073-90a9-201bd7bb741c -a 10.0.0.2 -s 4420 -i 4 00:15:25.038 00:40:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.038 00:40:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:25.038 00:40:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.038 00:40:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:25.038 00:40:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.949 [ 0]:0x1 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.949 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17bc1330a98444aa9d41f1aeec401639 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17bc1330a98444aa9d41f1aeec401639 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:15:27.209 [ 0]:0x1 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.209 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17bc1330a98444aa9d41f1aeec401639 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17bc1330a98444aa9d41f1aeec401639 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:27.469 [ 1]:0x2 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.469 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.730 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:27.730 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:27.730 00:40:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61bad9d6-7763-4073-90a9-201bd7bb741c -a 10.0.0.2 -s 4420 -i 4 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:15:27.990 00:40:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:29.901 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:29.901 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:29.901 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:30.162 [ 0]:0x2 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:30.162 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:30.423 [ 0]:0x1 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17bc1330a98444aa9d41f1aeec401639 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17bc1330a98444aa9d41f1aeec401639 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:30.423 [ 1]:0x2 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:30.423 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:30.684 
00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:30.684 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:30.945 [ 0]:0x2 00:15:30.945 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:30.945 00:40:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:30.945 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:30.945 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:30.945 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:30.945 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.945 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 61bad9d6-7763-4073-90a9-201bd7bb741c -a 10.0.0.2 -s 4420 -i 4 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:31.206 00:40:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:33.120 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:33.381 [ 0]:0x1 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=17bc1330a98444aa9d41f1aeec401639 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 17bc1330a98444aa9d41f1aeec401639 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:33.381 [ 1]:0x2 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.381 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:33.643 [ 0]:0x2 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:33.643 00:40:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:33.904 [2024-06-08 00:40:52.005860] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:33.904 request: 00:15:33.904 { 00:15:33.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.904 "nsid": 2, 00:15:33.904 "host": "nqn.2016-06.io.spdk:host1", 00:15:33.904 "method": 
"nvmf_ns_remove_host", 00:15:33.904 "req_id": 1 00:15:33.904 } 00:15:33.904 Got JSON-RPC error response 00:15:33.904 response: 00:15:33.904 { 00:15:33.904 "code": -32602, 00:15:33.904 "message": "Invalid parameters" 00:15:33.904 } 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:33.904 [ 0]:0x2 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:33.904 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:34.166 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6de90ea335894413860100f780066236 00:15:34.166 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6de90ea335894413860100f780066236 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:34.166 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:34.166 00:40:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.166 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.427 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.428 rmmod nvme_tcp 00:15:34.428 rmmod nvme_fabrics 00:15:34.428 rmmod nvme_keyring 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 343282 ']' 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 343282 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 343282 ']' 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 343282 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 343282 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 343282' 00:15:34.428 killing process with pid 343282 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 343282 00:15:34.428 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 343282 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.689 00:40:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.601 00:40:54 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.601 00:15:36.601 real 0m20.996s 00:15:36.601 user 0m50.468s 00:15:36.601 sys 0m6.751s 00:15:36.601 00:40:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:36.601 00:40:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.601 ************************************ 00:15:36.601 END TEST nvmf_ns_masking 00:15:36.601 ************************************ 00:15:36.601 00:40:54 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:36.601 00:40:54 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:36.601 00:40:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:36.601 00:40:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:36.601 00:40:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.863 ************************************ 00:15:36.863 START TEST nvmf_nvme_cli 00:15:36.863 ************************************ 00:15:36.863 00:40:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:36.863 * Looking for test storage... 00:15:36.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.863 00:40:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:45.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:45.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.003 00:41:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:45.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.003 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:45.004 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.004 00:41:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.814 ms 00:15:45.004 00:15:45.004 --- 10.0.0.2 ping statistics --- 00:15:45.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.004 rtt min/avg/max/mdev = 0.814/0.814/0.814/0.000 ms 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:15:45.004 00:15:45.004 --- 10.0.0.1 ping statistics --- 00:15:45.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.004 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=349786 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 349786 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 349786 ']' 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
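After each nvme connect, the harness polls until the expected number of block devices carrying the target's serial appears, which is the waitforserial loop traced repeatedly above at autotest_common.sh@1197-1207. It condenses to roughly the following sketch (a paraphrase of the traced steps, not the verbatim helper):

waitforserial() {
    # $1: serial string to look for; $2: expected device count (default 1).
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

# Usage as in the traces above, e.g. after a connect that should expose two
# namespaces:
#   waitforserial SPDKISFASTANDAWESOME 2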
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 [2024-06-08 00:41:02.123348] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:45.004 [2024-06-08 00:41:02.123395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:45.004 EAL: No free 2048 kB hugepages reported on node 1
00:15:45.004 [2024-06-08 00:41:02.191020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:45.004 [2024-06-08 00:41:02.256324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:45.004 [2024-06-08 00:41:02.256360] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:45.004 [2024-06-08 00:41:02.256368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:45.004 [2024-06-08 00:41:02.256374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:45.004 [2024-06-08 00:41:02.256380] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:45.004 [2024-06-08 00:41:02.259045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:15:45.004 [2024-06-08 00:41:02.259244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:15:45.004 [2024-06-08 00:41:02.259410] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:45.004 [2024-06-08 00:41:02.259453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 [2024-06-08 00:41:02.977120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 Malloc0
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 Malloc1
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 [2024-06-08 00:41:03.064151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:45.004 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:15:45.004
00:15:45.004 Discovery Log Number of Records 2, Generation counter 2
00:15:45.004 =====Discovery Log Entry 0======
00:15:45.004 trtype: tcp
00:15:45.004 adrfam: ipv4
00:15:45.004 subtype: current discovery subsystem
00:15:45.004 treq: not required
00:15:45.004 portid: 0
00:15:45.004 trsvcid: 4420
00:15:45.004 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:45.004 traddr: 10.0.0.2
00:15:45.004 eflags: explicit discovery connections, duplicate discovery information
00:15:45.004 sectype: none
00:15:45.004 =====Discovery Log Entry 1======
00:15:45.004 trtype: tcp
00:15:45.004 adrfam: ipv4
00:15:45.004 subtype: nvme subsystem
00:15:45.004 treq: not required
00:15:45.004 portid: 0
00:15:45.005 trsvcid: 4420
00:15:45.005 subnqn: nqn.2016-06.io.spdk:cnode1
00:15:45.005 traddr: 10.0.0.2
00:15:45.005 eflags: none
00:15:45.005 sectype: none
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:15:45.005 00:41:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]]
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2
00:15:46.957 00:41:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 ))
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter ))
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:15:48.890 00:41:06
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:48.890 /dev/nvme0n1 ]] 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:48.890 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:49.151 00:41:07 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:49.151 rmmod nvme_tcp
00:15:49.151 rmmod nvme_fabrics
00:15:49.151 rmmod nvme_keyring
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 349786 ']'
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 349786
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 349786 ']'
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 349786
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:15:49.151 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 349786
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 349786'
00:15:49.411 killing process with pid 349786
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 349786
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 349786
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:49.411 00:41:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:51.956 00:41:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:51.956
00:15:51.956 real 0m14.795s
00:15:51.956 user 0m23.493s
00:15:51.956 sys 0m5.799s
00:15:51.956 00:41:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable
00:15:51.956 00:41:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:51.956 ************************************
00:15:51.956 END TEST nvmf_nvme_cli
00:15:51.956 ************************************
00:15:51.956 00:41:09 nvmf_tcp --
nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:51.956 00:41:09 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:51.956 00:41:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:51.956 00:41:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:51.956 00:41:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.956 ************************************ 00:15:51.956 START TEST nvmf_host_management 00:15:51.956 ************************************ 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:51.956 * Looking for test storage... 00:15:51.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.956 
00:41:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.956 00:41:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:58.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:58.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:58.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:58.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.542 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:58.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:15:58.543 00:15:58.543 --- 10.0.0.2 ping statistics --- 00:15:58.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.543 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:15:58.543 00:15:58.543 --- 10.0.0.1 ping statistics --- 00:15:58.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.543 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.543 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=355075 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 355075 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:58.805 
00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 355075 ']' 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:58.805 00:41:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 [2024-06-08 00:41:16.885948] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:58.805 [2024-06-08 00:41:16.886000] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.805 [2024-06-08 00:41:16.951492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.805 [2024-06-08 00:41:17.046638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.805 [2024-06-08 00:41:17.046700] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.805 [2024-06-08 00:41:17.046708] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.805 [2024-06-08 00:41:17.046714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.805 [2024-06-08 00:41:17.046720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
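The target is launched inside the namespace as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, where -m 0x1E is the reactor core mask (cores 1-4 here), -e 0xFFFF enables all tracepoint groups (hence the app_setup_trace notices above), and -i 0 selects the shared-memory id that spdk_trace can later attach to. waitforlisten then blocks until the application answers on /var/tmp/spdk.sock (max_retries=100 in the trace). A sketch of that readiness loop, assuming SPDK's stock scripts/rpc.py; the loop body is an illustration, not the verbatim autotest helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for _ in $(seq 1 100); do    # max_retries=100, as in the log
        # any cheap RPC proves the UNIX-domain socket is up and serving
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5                # illustrative poll interval
    done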
00:15:58.805 [2024-06-08 00:41:17.046862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.805 [2024-06-08 00:41:17.047036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.805 [2024-06-08 00:41:17.047206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:58.805 [2024-06-08 00:41:17.047210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 [2024-06-08 00:41:17.795251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 Malloc0 00:15:59.746 [2024-06-08 00:41:17.858524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=355208 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 355208 /var/tmp/bdevperf.sock 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 355208 ']' 00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/bdevperf.sock
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:59.746 {
00:15:59.746 "params": {
00:15:59.746 "name": "Nvme$subsystem",
00:15:59.746 "trtype": "$TEST_TRANSPORT",
00:15:59.746 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:59.746 "adrfam": "ipv4",
00:15:59.746 "trsvcid": "$NVMF_PORT",
00:15:59.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:59.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:59.746 "hdgst": ${hdgst:-false},
00:15:59.746 "ddgst": ${ddgst:-false}
00:15:59.746 },
00:15:59.746 "method": "bdev_nvme_attach_controller"
00:15:59.746 }
00:15:59.746 EOF
00:15:59.746 )")
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:15:59.746 00:41:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:59.746 "params": {
00:15:59.746 "name": "Nvme0",
00:15:59.746 "trtype": "tcp",
00:15:59.746 "traddr": "10.0.0.2",
00:15:59.746 "adrfam": "ipv4",
00:15:59.746 "trsvcid": "4420",
00:15:59.746 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:15:59.746 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:15:59.746 "hdgst": false,
00:15:59.746 "ddgst": false
00:15:59.746 },
00:15:59.746 "method": "bdev_nvme_attach_controller"
00:15:59.746 }'
00:15:59.746 [2024-06-08 00:41:17.968750] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:59.746 [2024-06-08 00:41:17.968814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355208 ]
00:15:59.746 EAL: No free 2048 kB hugepages reported on node 1
00:15:59.746 [2024-06-08 00:41:18.027871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:00.007 [2024-06-08 00:41:18.093361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:16:00.007 Running I/O for 10 seconds...
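gen_nvmf_target_json expands the heredoc once (subsystem 0) into the bdev_nvme_attach_controller entry printed above, and bdevperf consumes it through the /dev/fd/63 process substitution: queue depth 64 (-q), 64 KiB I/O (-o 65536), a verify workload (-w verify), for 10 seconds (-t 10), against the bdev the attach creates. The same run can be reproduced with an ordinary file; the subsystems/config wrapper below follows SPDK's general JSON config layout and is a sketch, not the verbatim output of gen_nvmf_target_json:

    # Equivalent standalone bdevperf invocation (assumed wrapper layout; values from this run).
    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvmf.json \
        -q 64 -o 65536 -w verify -t 10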
00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:00.579 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.580 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.580 [2024-06-08 00:41:18.793572] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256fb30 is same with the state(5) to be set 00:16:00.580 [2024-06-08 00:41:18.793616] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x256fb30 is same with the state(5) to be set 00:16:00.580 [2024-06-08 00:41:18.794414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.580 [2024-06-08 00:41:18.794609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.580 [2024-06-08 00:41:18.794618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:16:00.580 [2024-06-08 00:41:18.794624 - 00:41:18.795497] nvme_qpair.c: [repetitive abort trace trimmed: the completion for the WRITE above plus the same NOTICE pair for every remaining queued command - nvme_io_qpair_print_command for WRITE sqid:1 cid:29-63 (lba 77440-81792, len:128) and READ sqid:1 cid:0-17 (lba 73728-75904, len:128), each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:16:00.581 [2024-06-08 00:41:18.795549] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12703c0 was disconnected and freed. reset controller.
00:16:00.581 [2024-06-08 00:41:18.796749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:00.581 task offset: 76032 on job bdev=Nvme0n1 fails
00:16:00.581
00:16:00.581 Latency(us)
00:16:00.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.581 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:00.581 Job: Nvme0n1 ended in about 0.51 seconds with error
00:16:00.581 Verification LBA range: start 0x0 length 0x400
00:16:00.581 Nvme0n1 : 0.51 1124.37 70.27 124.93 0.00 49968.61 1733.97 44564.48
00:16:00.581 ===================================================================================================================
00:16:00.581 Total : 1124.37 70.27 124.93 0.00 49968.61 1733.97 44564.48
00:16:00.581
00:16:00.581 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:16:00.581 [2024-06-08 00:41:18.798745] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:00.581 [2024-06-08 00:41:18.798766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37030 (9): Bad file descriptor
00:16:00.581 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:00.581 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:16:00.581 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:00.581 [2024-06-08 00:41:18.801207] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:16:00.581 [2024-06-08 00:41:18.801313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:16:00.581 [2024-06-08 00:41:18.801344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.581 [2024-06-08 00:41:18.801360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:00.581 [2024-06-08 00:41:18.801368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:00.581 [2024-06-08 00:41:18.801376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:00.581 [2024-06-08 00:41:18.801382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe37030 00:16:00.581 [2024-06-08 00:41:18.801408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37030 (9): Bad file descriptor 00:16:00.581 [2024-06-08 00:41:18.801422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:00.582 [2024-06-08 00:41:18.801429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:00.582 [2024-06-08 00:41:18.801437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:00.582 [2024-06-08 00:41:18.801450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:00.582 00:41:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.582 00:41:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 355208 00:16:01.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (355208) - No such process 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:01.967 { 00:16:01.967 "params": { 00:16:01.967 "name": "Nvme$subsystem", 00:16:01.967 "trtype": "$TEST_TRANSPORT", 00:16:01.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:01.967 "adrfam": "ipv4", 00:16:01.967 "trsvcid": "$NVMF_PORT", 00:16:01.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:01.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:01.967 "hdgst": ${hdgst:-false}, 00:16:01.967 "ddgst": ${ddgst:-false} 00:16:01.967 }, 00:16:01.967 "method": "bdev_nvme_attach_controller" 00:16:01.967 } 00:16:01.967 EOF 00:16:01.967 )") 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
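Sketch (not part of the captured run): the CONNECT failures above are the expected half of this test step. The host reconnects as nqn.2016-06.io.spdk:host0, is rejected with 'Subsystem ... does not allow host' (sct 1, sc 132), and can only succeed once host_management.sh@85 allow-lists it. Run by hand against the same target, that step is a single RPC; the script path and both NQNs below are the ones printed in the trace, everything else assumes default settings (e.g. the default RPC socket):

    # allow host0 on cnode0; until this is applied, every FABRIC CONNECT from host0
    # fails access control exactly as logged above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0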
00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:01.967 00:41:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:01.967 "params": { 00:16:01.967 "name": "Nvme0", 00:16:01.967 "trtype": "tcp", 00:16:01.967 "traddr": "10.0.0.2", 00:16:01.967 "adrfam": "ipv4", 00:16:01.967 "trsvcid": "4420", 00:16:01.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:01.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:01.967 "hdgst": false, 00:16:01.967 "ddgst": false 00:16:01.967 }, 00:16:01.967 "method": "bdev_nvme_attach_controller" 00:16:01.967 }' 00:16:01.967 [2024-06-08 00:41:19.863464] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:01.967 [2024-06-08 00:41:19.863518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355580 ] 00:16:01.967 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.967 [2024-06-08 00:41:19.922352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.967 [2024-06-08 00:41:19.985043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.967 Running I/O for 1 seconds... 00:16:02.908 00:16:02.908 Latency(us) 00:16:02.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.908 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:02.908 Verification LBA range: start 0x0 length 0x400 00:16:02.908 Nvme0n1 : 1.03 1238.04 77.38 0.00 0.00 50911.49 7973.55 44564.48 00:16:02.908 =================================================================================================================== 00:16:02.908 Total : 1238.04 77.38 0.00 0.00 50911.49 7973.55 44564.48 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.169 rmmod nvme_tcp 00:16:03.169 rmmod nvme_fabrics 00:16:03.169 rmmod nvme_keyring 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # 
'[' -n 355075 ']' 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 355075 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 355075 ']' 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 355075 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 355075 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 355075' 00:16:03.169 killing process with pid 355075 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 355075 00:16:03.169 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 355075 00:16:03.430 [2024-06-08 00:41:21.534477] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.430 00:41:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.346 00:41:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.346 00:41:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:05.346 00:16:05.346 real 0m13.835s 00:16:05.346 user 0m22.414s 00:16:05.346 sys 0m6.109s 00:16:05.608 00:41:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:05.608 00:41:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.608 ************************************ 00:16:05.608 END TEST nvmf_host_management 00:16:05.608 ************************************ 00:16:05.608 00:41:23 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:05.608 00:41:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:05.608 00:41:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:05.608 00:41:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.608 ************************************ 00:16:05.608 START TEST nvmf_lvol 00:16:05.608 ************************************ 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:16:05.609 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:41:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:41:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@2-6 -- # [toolchain PATH setup trimmed: four near-identical lines repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, then 'export PATH' and an echo of the final value]
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:41:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- #
local -g is_hw=no 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.609 00:41:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:12.239 00:41:30 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:12.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:12.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:12.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.239 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
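Sketch (not part of the captured run): the pci_net_devs globbing traced here is how common.sh maps a whitelisted PCI function to its kernel netdev. Reproduced standalone for the first E810 port found above (the PCI address and resulting name are this run's; other hosts will differ):

    pci=0000:4b:00.0                          # first 0x8086:0x159b function found above
    ls /sys/bus/pci/devices/$pci/net/         # prints the attached netdev: cvl_0_0 here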
00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:12.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.240 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:12.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:16:12.504 00:16:12.504 --- 10.0.0.2 ping statistics --- 00:16:12.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.504 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:12.504 00:16:12.504 --- 10.0.0.1 ping statistics --- 00:16:12.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.504 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.504 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=360140 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 360140 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 360140 ']' 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:12.765 00:41:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:12.765 [2024-06-08 00:41:30.888068] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:12.765 [2024-06-08 00:41:30.888133] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.765 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.765 [2024-06-08 00:41:30.960807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.765 [2024-06-08 00:41:31.034597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.765 [2024-06-08 00:41:31.034634] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
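Sketch (not part of the captured run): the ping exchange above closes out the network half of nvmftestinit. With NET_TYPE=phy and two ports of one E810 NIC, common.sh moves cvl_0_0 into a private namespace as the target side and keeps cvl_0_1 in the root namespace as the initiator side, so a single machine exercises real hardware end to end. Condensed from the commands traced above (names and addresses are this run's; the addr-flush steps, error handling and the iptables ACCEPT rule are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns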
00:16:12.765 [2024-06-08 00:41:31.034642] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.765 [2024-06-08 00:41:31.034648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.765 [2024-06-08 00:41:31.034654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.765 [2024-06-08 00:41:31.034795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.765 [2024-06-08 00:41:31.034918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.765 [2024-06-08 00:41:31.034921] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.703 00:41:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:13.704 [2024-06-08 00:41:31.851007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.704 00:41:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.964 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:13.964 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.964 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:13.964 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:14.223 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:14.484 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bdb41088-6c07-4580-80dd-de7c184520e8 00:16:14.484 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bdb41088-6c07-4580-80dd-de7c184520e8 lvol 20 00:16:14.484 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7011aad4-aaba-4abe-8bf6-e03fd063fa8d 00:16:14.484 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:14.744 00:41:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7011aad4-aaba-4abe-8bf6-e03fd063fa8d 00:16:15.004 00:41:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
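Sketch (not part of the captured run): the listener RPC just traced is the last link in the chain this test builds before pointing spdk_nvme_perf at it. The whole provisioning sequence, condensed from the RPCs above with $rpc_py standing in for the scripts/rpc.py path set earlier (the UUID arguments below are shell variables; this run's values were lvstore bdb41088-6c07-4580-80dd-de7c184520e8 and lvol 7011aad4-aaba-4abe-8bf6-e03fd063fa8d):

    $rpc_py bdev_malloc_create 64 512                 # Malloc0: 64 MiB, 512 B blocks
    $rpc_py bdev_malloc_create 64 512                 # Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs) # returns the lvstore UUID
    lvol=$($rpc_py bdev_lvol_create -u $lvs lvol 20)  # 20 == LVOL_BDEV_INIT_SIZE
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420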
00:16:15.004 [2024-06-08 00:41:33.187153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.005 00:41:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.265 00:41:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=360582 00:16:15.265 00:41:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:15.265 00:41:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:15.265 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.205 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7011aad4-aaba-4abe-8bf6-e03fd063fa8d MY_SNAPSHOT 00:16:16.466 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d02bca90-24be-455e-ba8e-ca8d6413f1f0 00:16:16.466 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7011aad4-aaba-4abe-8bf6-e03fd063fa8d 30 00:16:16.726 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d02bca90-24be-455e-ba8e-ca8d6413f1f0 MY_CLONE 00:16:16.726 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=62300fd8-8b6b-4ad0-9082-db9364e97eac 00:16:16.726 00:41:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 62300fd8-8b6b-4ad0-9082-db9364e97eac 00:16:17.296 00:41:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 360582 00:16:27.294 Initializing NVMe Controllers 00:16:27.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:27.294 Controller IO queue size 128, less than required. 00:16:27.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:27.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:27.294 Initialization complete. Launching workers. 
00:16:27.294 ======================================================== 00:16:27.294 Latency(us) 00:16:27.294 Device Information : IOPS MiB/s Average min max 00:16:27.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12500.90 48.83 10241.06 1476.42 44677.09 00:16:27.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17237.80 67.34 7427.09 3742.60 80306.24 00:16:27.294 ======================================================== 00:16:27.294 Total : 29738.69 116.17 8609.96 1476.42 80306.24 00:16:27.294 00:16:27.294 00:41:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:27.294 00:41:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7011aad4-aaba-4abe-8bf6-e03fd063fa8d 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bdb41088-6c07-4580-80dd-de7c184520e8 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.294 rmmod nvme_tcp 00:16:27.294 rmmod nvme_fabrics 00:16:27.294 rmmod nvme_keyring 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 360140 ']' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 360140 ']' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 360140' 00:16:27.294 killing process with pid 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 360140 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.294 00:41:44 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.294 00:41:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.680 00:16:28.680 real 0m22.957s 00:16:28.680 user 1m3.590s 00:16:28.680 sys 0m7.679s 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:28.680 ************************************ 00:16:28.680 END TEST nvmf_lvol 00:16:28.680 ************************************ 00:16:28.680 00:41:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:28.680 00:41:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:28.680 00:41:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:28.680 00:41:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.680 ************************************ 00:16:28.680 START TEST nvmf_lvs_grow 00:16:28.680 ************************************ 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:28.680 * Looking for test storage... 
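The nvmf_lvol run that just ended above walks a logical volume through its full lifecycle while spdk_nvme_perf drives random writes at it: snapshot the origin, resize it, clone the snapshot, then inflate the clone. Condensed into a standalone sketch (the $rpc shorthand and variable names are ours, not the harness's; the workspace paths are shortened; the UUIDs are the ones this particular run printed and a fresh run would print different ones):

  rpc=spdk/scripts/rpc.py                                # log uses the full workspace path
  lvol=7011aad4-aaba-4abe-8bf6-e03fd063fa8d              # origin lvol bdev from the run above
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # read-only snapshot; prints its UUID
  $rpc bdev_lvol_resize "$lvol" 30                       # grow the origin to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)         # thin clone backed by the snapshot
  $rpc bdev_lvol_inflate "$clone"                        # allocate all clusters; clone no longer depends on the snapshot

Deletion then has to run in the reverse order seen in the teardown above: the NVMe-oF subsystem first, then the lvol bdevs, then the lvstore itself.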
00:16:28.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.680 00:41:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.681 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:35.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:35.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.269 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:35.270 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:35.270 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.270 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:35.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:16:35.531 00:16:35.531 --- 10.0.0.2 ping statistics --- 00:16:35.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.531 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:16:35.531 00:16:35.531 --- 10.0.0.1 ping statistics --- 00:16:35.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.531 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.531 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=366910 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 366910 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 366910 ']' 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:35.792 00:41:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.792 [2024-06-08 00:41:53.901141] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:35.792 [2024-06-08 00:41:53.901201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.792 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.792 [2024-06-08 00:41:53.970336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.792 [2024-06-08 00:41:54.043562] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.792 [2024-06-08 00:41:54.043599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
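Everything nvmf_tcp_init did above reduces to a small amount of iproute2 plumbing: one of the two detected ice ports is moved into a private network namespace to play the target, both ends get 10.0.0.x addresses, port 4420 is opened, and nvmf_tgt is launched inside the namespace. A condensed sketch, assuming the interface names this run detected (cvl_0_0/cvl_0_1) and with workspace paths shortened; the final loop is roughly what the harness's waitforlisten helper does:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  until spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done   # wait for /var/tmp/spdk.sock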
00:16:35.792 [2024-06-08 00:41:54.043607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.792 [2024-06-08 00:41:54.043613] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.792 [2024-06-08 00:41:54.043619] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.792 [2024-06-08 00:41:54.043636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.734 [2024-06-08 00:41:54.846580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.734 ************************************ 00:16:36.734 START TEST lvs_grow_clean 00:16:36.734 ************************************ 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.734 00:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:36.995 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:36.995 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:37.255 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=87801ba4-b47d-4a86-81d5-6099559c7371 00:16:37.256 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:37.256 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:37.256 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:37.256 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:37.256 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87801ba4-b47d-4a86-81d5-6099559c7371 lvol 150 00:16:37.533 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e739ec70-a813-4da9-9734-c2b787e4c4cf 00:16:37.533 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:37.533 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:37.533 [2024-06-08 00:41:55.743509] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:37.533 [2024-06-08 00:41:55.743559] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:37.533 true 00:16:37.533 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:37.533 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:37.803 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:37.803 00:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:37.803 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e739ec70-a813-4da9-9734-c2b787e4c4cf 00:16:38.064 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:38.064 [2024-06-08 00:41:56.321264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.064 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=367522 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 367522 /var/tmp/bdevperf.sock 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 367522 ']' 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:38.326 00:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.326 [2024-06-08 00:41:56.534039] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
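The bdevperf invocation above starts idle: -z tells it to park on /var/tmp/bdevperf.sock instead of running immediately, so the test can first attach the exported NVMe-oF namespace as a local bdev and then start the workload on its own schedule. The shape of that flow as a sketch with shortened paths (the harness wraps each step in helpers, but the underlying RPCs are these):

  sock=/var/tmp/bdevperf.sock
  spdk/build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  until spdk/scripts/rpc.py -s $sock rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
  spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0       # exposes bdev Nvme0n1
  spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests      # kick off the 10 s run

Note the separate RPC socket (-r/-s): bdevperf and the nvmf_tgt running in the namespace each keep their own control socket, which is why the log alternates between plain rpc.py calls and rpc.py -s /var/tmp/bdevperf.sock calls.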
00:16:38.326 [2024-06-08 00:41:56.534090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367522 ] 00:16:38.326 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.586 [2024-06-08 00:41:56.610066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.586 [2024-06-08 00:41:56.674238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.157 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:39.157 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:16:39.157 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:39.417 Nvme0n1 00:16:39.417 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:39.679 [ 00:16:39.679 { 00:16:39.679 "name": "Nvme0n1", 00:16:39.679 "aliases": [ 00:16:39.679 "e739ec70-a813-4da9-9734-c2b787e4c4cf" 00:16:39.679 ], 00:16:39.679 "product_name": "NVMe disk", 00:16:39.679 "block_size": 4096, 00:16:39.679 "num_blocks": 38912, 00:16:39.679 "uuid": "e739ec70-a813-4da9-9734-c2b787e4c4cf", 00:16:39.679 "assigned_rate_limits": { 00:16:39.679 "rw_ios_per_sec": 0, 00:16:39.679 "rw_mbytes_per_sec": 0, 00:16:39.679 "r_mbytes_per_sec": 0, 00:16:39.679 "w_mbytes_per_sec": 0 00:16:39.679 }, 00:16:39.679 "claimed": false, 00:16:39.679 "zoned": false, 00:16:39.679 "supported_io_types": { 00:16:39.679 "read": true, 00:16:39.679 "write": true, 00:16:39.679 "unmap": true, 00:16:39.679 "write_zeroes": true, 00:16:39.679 "flush": true, 00:16:39.679 "reset": true, 00:16:39.679 "compare": true, 00:16:39.679 "compare_and_write": true, 00:16:39.679 "abort": true, 00:16:39.679 "nvme_admin": true, 00:16:39.679 "nvme_io": true 00:16:39.679 }, 00:16:39.679 "memory_domains": [ 00:16:39.679 { 00:16:39.679 "dma_device_id": "system", 00:16:39.679 "dma_device_type": 1 00:16:39.679 } 00:16:39.679 ], 00:16:39.679 "driver_specific": { 00:16:39.679 "nvme": [ 00:16:39.679 { 00:16:39.679 "trid": { 00:16:39.679 "trtype": "TCP", 00:16:39.679 "adrfam": "IPv4", 00:16:39.679 "traddr": "10.0.0.2", 00:16:39.679 "trsvcid": "4420", 00:16:39.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:39.679 }, 00:16:39.679 "ctrlr_data": { 00:16:39.679 "cntlid": 1, 00:16:39.679 "vendor_id": "0x8086", 00:16:39.679 "model_number": "SPDK bdev Controller", 00:16:39.679 "serial_number": "SPDK0", 00:16:39.679 "firmware_revision": "24.09", 00:16:39.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:39.679 "oacs": { 00:16:39.679 "security": 0, 00:16:39.679 "format": 0, 00:16:39.679 "firmware": 0, 00:16:39.679 "ns_manage": 0 00:16:39.679 }, 00:16:39.679 "multi_ctrlr": true, 00:16:39.679 "ana_reporting": false 00:16:39.679 }, 00:16:39.679 "vs": { 00:16:39.679 "nvme_version": "1.3" 00:16:39.679 }, 00:16:39.679 "ns_data": { 00:16:39.679 "id": 1, 00:16:39.679 "can_share": true 00:16:39.679 } 00:16:39.679 } 00:16:39.679 ], 00:16:39.679 "mp_policy": "active_passive" 00:16:39.679 } 00:16:39.679 } 00:16:39.679 ] 00:16:39.679 00:41:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=367670 00:16:39.679 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:39.679 00:41:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:39.679 Running I/O for 10 seconds... 00:16:40.621 Latency(us) 00:16:40.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.621 Nvme0n1 : 1.00 17388.00 67.92 0.00 0.00 0.00 0.00 0.00 00:16:40.621 =================================================================================================================== 00:16:40.621 Total : 17388.00 67.92 0.00 0.00 0.00 0.00 0.00 00:16:40.621 00:16:41.562 00:41:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:41.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.822 Nvme0n1 : 2.00 17466.00 68.23 0.00 0.00 0.00 0.00 0.00 00:16:41.822 =================================================================================================================== 00:16:41.822 Total : 17466.00 68.23 0.00 0.00 0.00 0.00 0.00 00:16:41.822 00:16:41.822 true 00:16:41.822 00:41:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:41.822 00:41:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:42.082 00:42:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:42.082 00:42:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:42.083 00:42:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 367670 00:16:42.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.654 Nvme0n1 : 3.00 17497.33 68.35 0.00 0.00 0.00 0.00 0.00 00:16:42.654 =================================================================================================================== 00:16:42.654 Total : 17497.33 68.35 0.00 0.00 0.00 0.00 0.00 00:16:42.654 00:16:44.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.034 Nvme0n1 : 4.00 17519.00 68.43 0.00 0.00 0.00 0.00 0.00 00:16:44.034 =================================================================================================================== 00:16:44.034 Total : 17519.00 68.43 0.00 0.00 0.00 0.00 0.00 00:16:44.034 00:16:44.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.974 Nvme0n1 : 5.00 17543.20 68.53 0.00 0.00 0.00 0.00 0.00 00:16:44.974 =================================================================================================================== 00:16:44.974 Total : 17543.20 68.53 0.00 0.00 0.00 0.00 0.00 00:16:44.974 00:16:45.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.916 Nvme0n1 : 6.00 17562.00 68.60 0.00 0.00 0.00 0.00 0.00 00:16:45.916 
=================================================================================================================== 00:16:45.916 Total : 17562.00 68.60 0.00 0.00 0.00 0.00 0.00 00:16:45.916 00:16:46.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.859 Nvme0n1 : 7.00 17578.86 68.67 0.00 0.00 0.00 0.00 0.00 00:16:46.859 =================================================================================================================== 00:16:46.859 Total : 17578.86 68.67 0.00 0.00 0.00 0.00 0.00 00:16:46.859 00:16:47.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.802 Nvme0n1 : 8.00 17593.50 68.72 0.00 0.00 0.00 0.00 0.00 00:16:47.802 =================================================================================================================== 00:16:47.802 Total : 17593.50 68.72 0.00 0.00 0.00 0.00 0.00 00:16:47.802 00:16:48.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.744 Nvme0n1 : 9.00 17604.00 68.77 0.00 0.00 0.00 0.00 0.00 00:16:48.744 =================================================================================================================== 00:16:48.744 Total : 17604.00 68.77 0.00 0.00 0.00 0.00 0.00 00:16:48.744 00:16:49.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.686 Nvme0n1 : 10.00 17615.60 68.81 0.00 0.00 0.00 0.00 0.00 00:16:49.686 =================================================================================================================== 00:16:49.686 Total : 17615.60 68.81 0.00 0.00 0.00 0.00 0.00 00:16:49.686 00:16:49.686 00:16:49.686 Latency(us) 00:16:49.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.686 Nvme0n1 : 10.01 17616.18 68.81 0.00 0.00 7260.83 4369.07 11741.87 00:16:49.686 =================================================================================================================== 00:16:49.686 Total : 17616.18 68.81 0.00 0.00 7260.83 4369.07 11741.87 00:16:49.686 0 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 367522 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 367522 ']' 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 367522 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:49.686 00:42:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 367522 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 367522' 00:16:49.947 killing process with pid 367522 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 367522 00:16:49.947 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.947 00:16:49.947 Latency(us) 00:16:49.947 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:16:49.947 =================================================================================================================== 00:16:49.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 367522 00:16:49.947 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:50.208 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:50.208 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:50.208 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:50.470 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:50.470 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:50.470 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:50.731 [2024-06-08 00:42:08.777550] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:50.731 request: 00:16:50.731 { 00:16:50.731 "uuid": "87801ba4-b47d-4a86-81d5-6099559c7371", 00:16:50.731 "method": "bdev_lvol_get_lvstores", 00:16:50.731 "req_id": 1 00:16:50.731 } 00:16:50.731 Got JSON-RPC error response 00:16:50.731 response: 00:16:50.731 { 00:16:50.731 "code": -19, 00:16:50.731 "message": "No such device" 00:16:50.731 } 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:50.731 00:42:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.992 aio_bdev 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e739ec70-a813-4da9-9734-c2b787e4c4cf 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=e739ec70-a813-4da9-9734-c2b787e4c4cf 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:50.992 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:51.253 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e739ec70-a813-4da9-9734-c2b787e4c4cf -t 2000 00:16:51.253 [ 00:16:51.253 { 00:16:51.253 "name": "e739ec70-a813-4da9-9734-c2b787e4c4cf", 00:16:51.253 "aliases": [ 00:16:51.253 "lvs/lvol" 00:16:51.253 ], 00:16:51.253 "product_name": "Logical Volume", 00:16:51.253 "block_size": 4096, 00:16:51.253 "num_blocks": 38912, 00:16:51.253 "uuid": "e739ec70-a813-4da9-9734-c2b787e4c4cf", 00:16:51.253 "assigned_rate_limits": { 00:16:51.253 "rw_ios_per_sec": 0, 00:16:51.253 "rw_mbytes_per_sec": 0, 00:16:51.253 "r_mbytes_per_sec": 0, 00:16:51.253 "w_mbytes_per_sec": 0 00:16:51.253 }, 00:16:51.253 "claimed": false, 00:16:51.253 "zoned": false, 00:16:51.253 "supported_io_types": { 00:16:51.253 "read": true, 00:16:51.253 "write": true, 00:16:51.253 "unmap": true, 00:16:51.253 "write_zeroes": true, 00:16:51.253 "flush": false, 00:16:51.253 "reset": true, 00:16:51.253 "compare": false, 00:16:51.253 "compare_and_write": false, 00:16:51.253 "abort": false, 00:16:51.253 "nvme_admin": false, 00:16:51.253 "nvme_io": false 00:16:51.253 }, 00:16:51.253 "driver_specific": { 00:16:51.253 "lvol": { 00:16:51.253 "lvol_store_uuid": "87801ba4-b47d-4a86-81d5-6099559c7371", 00:16:51.253 "base_bdev": "aio_bdev", 
00:16:51.253 "thin_provision": false, 00:16:51.253 "num_allocated_clusters": 38, 00:16:51.253 "snapshot": false, 00:16:51.253 "clone": false, 00:16:51.253 "esnap_clone": false 00:16:51.253 } 00:16:51.253 } 00:16:51.253 } 00:16:51.253 ] 00:16:51.253 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:16:51.253 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:51.253 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:51.514 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:51.514 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:51.514 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:51.514 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:51.514 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e739ec70-a813-4da9-9734-c2b787e4c4cf 00:16:51.775 00:42:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87801ba4-b47d-4a86-81d5-6099559c7371 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.036 00:16:52.036 real 0m15.350s 00:16:52.036 user 0m15.049s 00:16:52.036 sys 0m1.306s 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:52.036 ************************************ 00:16:52.036 END TEST lvs_grow_clean 00:16:52.036 ************************************ 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:52.036 00:42:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.297 ************************************ 00:16:52.297 START TEST lvs_grow_dirty 00:16:52.297 ************************************ 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:52.297 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:52.558 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:16:52.558 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:16:52.558 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:52.818 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:52.818 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:52.818 00:42:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 lvol 150 00:16:52.818 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:16:52.818 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.818 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:53.082 [2024-06-08 00:42:11.140418] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:53.082 [2024-06-08 00:42:11.140472] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:53.082 true 00:16:53.082 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:16:53.082 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:53.082 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:53.082 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:53.361 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:16:53.361 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:53.621 [2024-06-08 00:42:11.718152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=371104 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 371104 /var/tmp/bdevperf.sock 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 371104 ']' 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:53.621 00:42:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:53.882 [2024-06-08 00:42:11.915159] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
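As in the clean variant above, the aio_bdev backing file was already doubled and rescanned during setup (nvmf_lvs_grow.sh@36-37); while the 10-second workload runs, the test grows the lvstore into that space and re-reads the cluster count. Condensed into a sketch (paths shortened; $lvs stands in for the lvstore UUID this run printed):

  truncate -s 400M test/nvmf/target/aio_bdev              # done at setup: backing file 200M -> 400M
  spdk/scripts/rpc.py bdev_aio_rescan aio_bdev            # bdev grows: 51200 -> 102400 blocks
  spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"    # done mid-I/O by nvmf_lvs_grow.sh@60
  spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
  # expected: 99 after the grow (was 49 before, with 4 MiB clusters and one cluster of metadata)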
00:16:53.882 [2024-06-08 00:42:11.915206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371104 ] 00:16:53.882 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.882 [2024-06-08 00:42:11.990477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.882 [2024-06-08 00:42:12.044173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.455 00:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:54.455 00:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:54.455 00:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:54.715 Nvme0n1 00:16:54.715 00:42:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:54.975 [ 00:16:54.975 { 00:16:54.975 "name": "Nvme0n1", 00:16:54.975 "aliases": [ 00:16:54.975 "00ec9e51-a0b5-49ef-9575-db126ff3ae64" 00:16:54.975 ], 00:16:54.975 "product_name": "NVMe disk", 00:16:54.975 "block_size": 4096, 00:16:54.975 "num_blocks": 38912, 00:16:54.975 "uuid": "00ec9e51-a0b5-49ef-9575-db126ff3ae64", 00:16:54.975 "assigned_rate_limits": { 00:16:54.975 "rw_ios_per_sec": 0, 00:16:54.975 "rw_mbytes_per_sec": 0, 00:16:54.975 "r_mbytes_per_sec": 0, 00:16:54.975 "w_mbytes_per_sec": 0 00:16:54.975 }, 00:16:54.975 "claimed": false, 00:16:54.975 "zoned": false, 00:16:54.975 "supported_io_types": { 00:16:54.975 "read": true, 00:16:54.975 "write": true, 00:16:54.975 "unmap": true, 00:16:54.975 "write_zeroes": true, 00:16:54.975 "flush": true, 00:16:54.975 "reset": true, 00:16:54.975 "compare": true, 00:16:54.975 "compare_and_write": true, 00:16:54.975 "abort": true, 00:16:54.975 "nvme_admin": true, 00:16:54.975 "nvme_io": true 00:16:54.975 }, 00:16:54.975 "memory_domains": [ 00:16:54.975 { 00:16:54.975 "dma_device_id": "system", 00:16:54.975 "dma_device_type": 1 00:16:54.975 } 00:16:54.975 ], 00:16:54.975 "driver_specific": { 00:16:54.975 "nvme": [ 00:16:54.975 { 00:16:54.975 "trid": { 00:16:54.975 "trtype": "TCP", 00:16:54.975 "adrfam": "IPv4", 00:16:54.975 "traddr": "10.0.0.2", 00:16:54.975 "trsvcid": "4420", 00:16:54.975 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:54.975 }, 00:16:54.975 "ctrlr_data": { 00:16:54.975 "cntlid": 1, 00:16:54.975 "vendor_id": "0x8086", 00:16:54.975 "model_number": "SPDK bdev Controller", 00:16:54.975 "serial_number": "SPDK0", 00:16:54.975 "firmware_revision": "24.09", 00:16:54.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.975 "oacs": { 00:16:54.975 "security": 0, 00:16:54.975 "format": 0, 00:16:54.975 "firmware": 0, 00:16:54.975 "ns_manage": 0 00:16:54.975 }, 00:16:54.975 "multi_ctrlr": true, 00:16:54.975 "ana_reporting": false 00:16:54.975 }, 00:16:54.975 "vs": { 00:16:54.975 "nvme_version": "1.3" 00:16:54.975 }, 00:16:54.975 "ns_data": { 00:16:54.975 "id": 1, 00:16:54.975 "can_share": true 00:16:54.975 } 00:16:54.975 } 00:16:54.975 ], 00:16:54.975 "mp_policy": "active_passive" 00:16:54.975 } 00:16:54.975 } 00:16:54.975 ] 00:16:54.975 00:42:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=371288 00:16:54.975 00:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:54.975 00:42:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:54.975 Running I/O for 10 seconds... 00:16:55.916 Latency(us) 00:16:55.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.916 Nvme0n1 : 1.00 17388.00 67.92 0.00 0.00 0.00 0.00 0.00 00:16:55.916 =================================================================================================================== 00:16:55.916 Total : 17388.00 67.92 0.00 0.00 0.00 0.00 0.00 00:16:55.916 00:16:56.858 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:16:57.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.117 Nvme0n1 : 2.00 17470.00 68.24 0.00 0.00 0.00 0.00 0.00 00:16:57.117 =================================================================================================================== 00:16:57.117 Total : 17470.00 68.24 0.00 0.00 0.00 0.00 0.00 00:16:57.117 00:16:57.117 true 00:16:57.117 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:57.117 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:16:57.378 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:57.378 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:57.378 00:42:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 371288 00:16:57.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.949 Nvme0n1 : 3.00 17505.33 68.38 0.00 0.00 0.00 0.00 0.00 00:16:57.949 =================================================================================================================== 00:16:57.949 Total : 17505.33 68.38 0.00 0.00 0.00 0.00 0.00 00:16:57.949 00:16:59.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.333 Nvme0n1 : 4.00 17533.00 68.49 0.00 0.00 0.00 0.00 0.00 00:16:59.333 =================================================================================================================== 00:16:59.333 Total : 17533.00 68.49 0.00 0.00 0.00 0.00 0.00 00:16:59.333 00:17:00.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.275 Nvme0n1 : 5.00 17557.60 68.58 0.00 0.00 0.00 0.00 0.00 00:17:00.275 =================================================================================================================== 00:17:00.275 Total : 17557.60 68.58 0.00 0.00 0.00 0.00 0.00 00:17:00.275 00:17:01.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.217 Nvme0n1 : 6.00 17575.33 68.65 0.00 0.00 0.00 0.00 0.00 00:17:01.217 
=================================================================================================================== 00:17:01.217 Total : 17575.33 68.65 0.00 0.00 0.00 0.00 0.00 00:17:01.217 00:17:02.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.160 Nvme0n1 : 7.00 17589.14 68.71 0.00 0.00 0.00 0.00 0.00 00:17:02.160 =================================================================================================================== 00:17:02.160 Total : 17589.14 68.71 0.00 0.00 0.00 0.00 0.00 00:17:02.160 00:17:03.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.102 Nvme0n1 : 8.00 17601.50 68.76 0.00 0.00 0.00 0.00 0.00 00:17:03.102 =================================================================================================================== 00:17:03.102 Total : 17601.50 68.76 0.00 0.00 0.00 0.00 0.00 00:17:03.102 00:17:04.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.044 Nvme0n1 : 9.00 17612.89 68.80 0.00 0.00 0.00 0.00 0.00 00:17:04.044 =================================================================================================================== 00:17:04.044 Total : 17612.89 68.80 0.00 0.00 0.00 0.00 0.00 00:17:04.044 00:17:04.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.986 Nvme0n1 : 10.00 17623.60 68.84 0.00 0.00 0.00 0.00 0.00 00:17:04.986 =================================================================================================================== 00:17:04.986 Total : 17623.60 68.84 0.00 0.00 0.00 0.00 0.00 00:17:04.986 00:17:04.986 00:17:04.986 Latency(us) 00:17:04.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.986 Nvme0n1 : 10.01 17623.89 68.84 0.00 0.00 7257.78 3604.48 11195.73 00:17:04.986 =================================================================================================================== 00:17:04.986 Total : 17623.89 68.84 0.00 0.00 7257.78 3604.48 11195.73 00:17:04.986 0 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 371104 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 371104 ']' 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 371104 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:04.986 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 371104 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 371104' 00:17:05.246 killing process with pid 371104 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 371104 00:17:05.246 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.246 00:17:05.246 Latency(us) 00:17:05.246 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:05.246 =================================================================================================================== 00:17:05.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 371104 00:17:05.246 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.506 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:05.506 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:05.506 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 366910 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 366910 00:17:05.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 366910 Killed "${NVMF_APP[@]}" "$@" 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=373435 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 373435 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 373435 ']' 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
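The hard kill a few lines up, together with the bdev_aio_create that follows below, is the core of the lvs_grow_dirty case: the target dies without unloading the blobstore, so the replacement target has to replay the lvstore metadata when the same backing file is re-attached. A minimal sketch of that sequence, assuming a built SPDK tree; $rpc, $nvmfpid and $lvs are placeholder names for scripts/rpc.py, the old target pid and the lvstore UUID, not variables defined by the test itself:

    # $rpc, $nvmfpid and $lvs are placeholders (see note above).
    # Kill the target hard so the blobstore is never cleanly unloaded.
    kill -9 "$nvmfpid"

    # Bring up a fresh target on one core, as the trace does.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # Re-creating the AIO bdev over the same backing file triggers
    # blobstore recovery ("Performing recovery on blobstore" below);
    # the lvstore and its lvol come back without an explicit import.
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_lvol_get_lvstores -u "$lvs"

The cluster counts checked afterwards (free_clusters == 61, total_data_clusters == 99) confirm that the grow performed before the kill survived the recovery.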
00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:05.767 00:42:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-06-08 00:42:23.963333] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:05.767 [2024-06-08 00:42:23.963383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.767 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.767 [2024-06-08 00:42:24.027848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.027 [2024-06-08 00:42:24.091867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.028 [2024-06-08 00:42:24.091903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.028 [2024-06-08 00:42:24.091910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.028 [2024-06-08 00:42:24.091916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.028 [2024-06-08 00:42:24.091922] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.028 [2024-06-08 00:42:24.091939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.598 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.858 [2024-06-08 00:42:24.908697] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:06.858 [2024-06-08 00:42:24.908791] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:06.858 [2024-06-08 00:42:24.908820] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:06.858 00:42:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:06.858 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00ec9e51-a0b5-49ef-9575-db126ff3ae64 -t 2000 00:17:07.118 [ 00:17:07.118 { 00:17:07.118 "name": "00ec9e51-a0b5-49ef-9575-db126ff3ae64", 00:17:07.118 "aliases": [ 00:17:07.118 "lvs/lvol" 00:17:07.118 ], 00:17:07.119 "product_name": "Logical Volume", 00:17:07.119 "block_size": 4096, 00:17:07.119 "num_blocks": 38912, 00:17:07.119 "uuid": "00ec9e51-a0b5-49ef-9575-db126ff3ae64", 00:17:07.119 "assigned_rate_limits": { 00:17:07.119 "rw_ios_per_sec": 0, 00:17:07.119 "rw_mbytes_per_sec": 0, 00:17:07.119 "r_mbytes_per_sec": 0, 00:17:07.119 "w_mbytes_per_sec": 0 00:17:07.119 }, 00:17:07.119 "claimed": false, 00:17:07.119 "zoned": false, 00:17:07.119 "supported_io_types": { 00:17:07.119 "read": true, 00:17:07.119 "write": true, 00:17:07.119 "unmap": true, 00:17:07.119 "write_zeroes": true, 00:17:07.119 "flush": false, 00:17:07.119 "reset": true, 00:17:07.119 "compare": false, 00:17:07.119 "compare_and_write": false, 00:17:07.119 "abort": false, 00:17:07.119 "nvme_admin": false, 00:17:07.119 "nvme_io": false 00:17:07.119 }, 00:17:07.119 "driver_specific": { 00:17:07.119 "lvol": { 00:17:07.119 "lvol_store_uuid": "4cc4ae03-3d11-45ad-9585-b0df15c76a12", 00:17:07.119 "base_bdev": "aio_bdev", 00:17:07.119 "thin_provision": false, 00:17:07.119 "num_allocated_clusters": 38, 00:17:07.119 "snapshot": false, 00:17:07.119 "clone": false, 00:17:07.119 "esnap_clone": false 00:17:07.119 } 00:17:07.119 } 00:17:07.119 } 00:17:07.119 ] 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:07.119 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:07.379 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:07.379 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:07.640 [2024-06-08 00:42:25.692671] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:07.640 request: 00:17:07.640 { 00:17:07.640 "uuid": "4cc4ae03-3d11-45ad-9585-b0df15c76a12", 00:17:07.640 "method": "bdev_lvol_get_lvstores", 00:17:07.640 "req_id": 1 00:17:07.640 } 00:17:07.640 Got JSON-RPC error response 00:17:07.640 response: 00:17:07.640 { 00:17:07.640 "code": -19, 00:17:07.640 "message": "No such device" 00:17:07.640 } 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:07.640 00:42:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.901 aio_bdev 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
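The JSON-RPC error above is intentional: once bdev_aio_delete removes the base bdev, the lvstore must be unreachable, and the script asserts that with the NOT wrapper from autotest_common.sh, whose es=0 / (( !es == 0 )) bookkeeping is visible in the trace. A rough bash equivalent, simplified from the real helper (which also validates that the wrapped command is executable and normalizes signal exit codes):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Expected to fail with -19 "No such device"; $rpc and $lvs are
    # placeholders for scripts/rpc.py and the lvstore UUID.
    NOT "$rpc" bdev_lvol_get_lvstores -u "$lvs"

After the AIO bdev is re-created, the waitforbdev call here polls bdev_get_bdevs until the recovered lvol reappears.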
00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:07.901 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.161 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00ec9e51-a0b5-49ef-9575-db126ff3ae64 -t 2000 00:17:08.161 [ 00:17:08.161 { 00:17:08.161 "name": "00ec9e51-a0b5-49ef-9575-db126ff3ae64", 00:17:08.162 "aliases": [ 00:17:08.162 "lvs/lvol" 00:17:08.162 ], 00:17:08.162 "product_name": "Logical Volume", 00:17:08.162 "block_size": 4096, 00:17:08.162 "num_blocks": 38912, 00:17:08.162 "uuid": "00ec9e51-a0b5-49ef-9575-db126ff3ae64", 00:17:08.162 "assigned_rate_limits": { 00:17:08.162 "rw_ios_per_sec": 0, 00:17:08.162 "rw_mbytes_per_sec": 0, 00:17:08.162 "r_mbytes_per_sec": 0, 00:17:08.162 "w_mbytes_per_sec": 0 00:17:08.162 }, 00:17:08.162 "claimed": false, 00:17:08.162 "zoned": false, 00:17:08.162 "supported_io_types": { 00:17:08.162 "read": true, 00:17:08.162 "write": true, 00:17:08.162 "unmap": true, 00:17:08.162 "write_zeroes": true, 00:17:08.162 "flush": false, 00:17:08.162 "reset": true, 00:17:08.162 "compare": false, 00:17:08.162 "compare_and_write": false, 00:17:08.162 "abort": false, 00:17:08.162 "nvme_admin": false, 00:17:08.162 "nvme_io": false 00:17:08.162 }, 00:17:08.162 "driver_specific": { 00:17:08.162 "lvol": { 00:17:08.162 "lvol_store_uuid": "4cc4ae03-3d11-45ad-9585-b0df15c76a12", 00:17:08.162 "base_bdev": "aio_bdev", 00:17:08.162 "thin_provision": false, 00:17:08.162 "num_allocated_clusters": 38, 00:17:08.162 "snapshot": false, 00:17:08.162 "clone": false, 00:17:08.162 "esnap_clone": false 00:17:08.162 } 00:17:08.162 } 00:17:08.162 } 00:17:08.162 ] 00:17:08.162 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:17:08.162 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:08.162 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:08.438 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:08.438 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:08.438 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:08.438 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:08.438 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00ec9e51-a0b5-49ef-9575-db126ff3ae64 00:17:08.718 00:42:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cc4ae03-3d11-45ad-9585-b0df15c76a12 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.979 00:17:08.979 real 0m16.860s 00:17:08.979 user 0m44.025s 00:17:08.979 sys 0m3.057s 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:08.979 ************************************ 00:17:08.979 END TEST lvs_grow_dirty 00:17:08.979 ************************************ 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:17:08.979 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:08.979 nvmf_trace.0 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.240 rmmod nvme_tcp 00:17:09.240 rmmod nvme_fabrics 00:17:09.240 rmmod nvme_keyring 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 373435 ']' 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 373435 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 373435 ']' 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 373435 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 373435 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 373435' 00:17:09.240 killing process with pid 373435 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 373435 00:17:09.240 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 373435 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.501 00:42:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.414 00:42:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.414 00:17:11.414 real 0m42.901s 00:17:11.414 user 1m5.016s 00:17:11.414 sys 0m9.956s 00:17:11.414 00:42:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:11.414 00:42:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.414 ************************************ 00:17:11.414 END TEST nvmf_lvs_grow 00:17:11.414 ************************************ 00:17:11.414 00:42:29 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:11.414 00:42:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:11.414 00:42:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:11.414 00:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.414 ************************************ 00:17:11.414 START TEST nvmf_bdev_io_wait 00:17:11.414 ************************************ 00:17:11.414 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:11.675 * Looking for test storage... 
00:17:11.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.675 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.676 00:42:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:19.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:19.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:19.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:19.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.821 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:19.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:17:19.822 00:17:19.822 --- 10.0.0.2 ping statistics --- 00:17:19.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.822 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:17:19.822 00:17:19.822 --- 10.0.0.1 ping statistics --- 00:17:19.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.822 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=378352 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 378352 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 378352 ']' 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:19.822 00:42:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 [2024-06-08 00:42:36.973605] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:17:19.822 [2024-06-08 00:42:36.973668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.822 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.822 [2024-06-08 00:42:37.043842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.822 [2024-06-08 00:42:37.123511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.822 [2024-06-08 00:42:37.123547] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.822 [2024-06-08 00:42:37.123554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.822 [2024-06-08 00:42:37.123560] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.822 [2024-06-08 00:42:37.123569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.822 [2024-06-08 00:42:37.123722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.822 [2024-06-08 00:42:37.123847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.822 [2024-06-08 00:42:37.124007] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.822 [2024-06-08 00:42:37.124008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 [2024-06-08 00:42:37.860091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.822 00:42:37 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.822 Malloc0 00:17:19.822 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.823 [2024-06-08 00:42:37.930714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=378501 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=378504 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.823 { 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme$subsystem", 00:17:19.823 "trtype": "$TEST_TRANSPORT", 00:17:19.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "$NVMF_PORT", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.823 "hdgst": ${hdgst:-false}, 00:17:19.823 "ddgst": ${ddgst:-false} 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 } 00:17:19.823 EOF 00:17:19.823 )") 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=378506 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.823 { 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme$subsystem", 00:17:19.823 "trtype": "$TEST_TRANSPORT", 00:17:19.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "$NVMF_PORT", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.823 "hdgst": ${hdgst:-false}, 00:17:19.823 "ddgst": ${ddgst:-false} 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 } 00:17:19.823 EOF 00:17:19.823 )") 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=378510 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.823 { 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme$subsystem", 00:17:19.823 "trtype": "$TEST_TRANSPORT", 00:17:19.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "$NVMF_PORT", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.823 "hdgst": ${hdgst:-false}, 00:17:19.823 "ddgst": ${ddgst:-false} 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 } 00:17:19.823 EOF 00:17:19.823 )") 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:19.823 { 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme$subsystem", 00:17:19.823 "trtype": "$TEST_TRANSPORT", 00:17:19.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "$NVMF_PORT", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.823 "hdgst": ${hdgst:-false}, 00:17:19.823 "ddgst": ${ddgst:-false} 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 } 00:17:19.823 EOF 00:17:19.823 )") 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 378501 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme1", 00:17:19.823 "trtype": "tcp", 00:17:19.823 "traddr": "10.0.0.2", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "4420", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.823 "hdgst": false, 00:17:19.823 "ddgst": false 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 }' 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme1", 00:17:19.823 "trtype": "tcp", 00:17:19.823 "traddr": "10.0.0.2", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "4420", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.823 "hdgst": false, 00:17:19.823 "ddgst": false 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 }' 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme1", 00:17:19.823 "trtype": "tcp", 00:17:19.823 "traddr": "10.0.0.2", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "4420", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.823 "hdgst": false, 00:17:19.823 "ddgst": false 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 00:17:19.823 }' 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.823 00:42:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.823 "params": { 00:17:19.823 "name": "Nvme1", 00:17:19.823 "trtype": "tcp", 00:17:19.823 "traddr": "10.0.0.2", 00:17:19.823 "adrfam": "ipv4", 00:17:19.823 "trsvcid": "4420", 00:17:19.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.823 "hdgst": false, 00:17:19.823 "ddgst": false 00:17:19.823 }, 00:17:19.823 "method": "bdev_nvme_attach_controller" 
00:17:19.823 }' 00:17:19.823 [2024-06-08 00:42:37.980377] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:19.823 [2024-06-08 00:42:37.980433] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:19.823 [2024-06-08 00:42:37.984656] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:19.824 [2024-06-08 00:42:37.984702] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:19.824 [2024-06-08 00:42:37.984975] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:19.824 [2024-06-08 00:42:37.985017] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:19.824 [2024-06-08 00:42:37.985453] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:19.824 [2024-06-08 00:42:37.985499] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:19.824 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.824 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.085 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.085 [2024-06-08 00:42:38.125128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.085 [2024-06-08 00:42:38.168487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.085 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.085 [2024-06-08 00:42:38.177139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:20.085 [2024-06-08 00:42:38.218064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.085 [2024-06-08 00:42:38.219698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:17:20.085 [2024-06-08 00:42:38.264486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.085 [2024-06-08 00:42:38.268038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:20.085 [2024-06-08 00:42:38.315412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:20.085 Running I/O for 1 seconds... 00:17:20.345 Running I/O for 1 seconds... 00:17:20.345 Running I/O for 1 seconds... 00:17:20.345 Running I/O for 1 seconds... 
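bdev_io_wait.sh fans out four bdevperf instances in parallel — write, read, flush and unmap, pinned to cores 0x10/0x20/0x40/0x80 — each reading its config from /dev/fd/63 via gen_nvmf_target_json. Only the inner bdev_nvme_attach_controller object appears verbatim in the trace; the "subsystems"/"bdev" wrapper below is an assumption about what the jq step emits. A sketch of one of the four invocations:

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )

The read/flush/unmap siblings differ only in -w, -m and -i, which is why all four can attach to the same cnode1 listener concurrently and the script simply waits on their PIDs (378501/378504/378506/378510).
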
00:17:21.288 00:17:21.288 Latency(us) 00:17:21.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.288 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:21.288 Nvme1n1 : 1.01 13885.32 54.24 0.00 0.00 9189.55 5324.80 16602.45 00:17:21.288 =================================================================================================================== 00:17:21.288 Total : 13885.32 54.24 0.00 0.00 9189.55 5324.80 16602.45 00:17:21.288 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 378504 00:17:21.288 00:17:21.288 Latency(us) 00:17:21.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.288 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:21.288 Nvme1n1 : 1.00 189094.95 738.65 0.00 0.00 673.68 269.65 757.76 00:17:21.288 =================================================================================================================== 00:17:21.288 Total : 189094.95 738.65 0.00 0.00 673.68 269.65 757.76 00:17:21.288 00:17:21.288 Latency(us) 00:17:21.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.288 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:21.288 Nvme1n1 : 1.01 8232.02 32.16 0.00 0.00 15469.40 9939.63 30146.56 00:17:21.288 =================================================================================================================== 00:17:21.288 Total : 8232.02 32.16 0.00 0.00 15469.40 9939.63 30146.56 00:17:21.288 00:17:21.288 Latency(us) 00:17:21.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.288 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:21.288 Nvme1n1 : 1.00 13638.65 53.28 0.00 0.00 9360.39 4423.68 22719.15 00:17:21.288 =================================================================================================================== 00:17:21.288 Total : 13638.65 53.28 0.00 0.00 9360.39 4423.68 22719.15 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 378506 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 378510 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.549 rmmod nvme_tcp 00:17:21.549 rmmod nvme_fabrics 00:17:21.549 rmmod nvme_keyring 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 378352 ']' 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 378352 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 378352 ']' 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 378352 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:21.549 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 378352 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 378352' 00:17:21.810 killing process with pid 378352 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 378352 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 378352 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.810 00:42:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.358 00:42:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.358 00:17:24.358 real 0m12.347s 00:17:24.358 user 0m18.321s 00:17:24.358 sys 0m6.793s 00:17:24.358 00:42:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:24.358 00:42:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:24.358 ************************************ 00:17:24.358 END TEST nvmf_bdev_io_wait 00:17:24.358 ************************************ 00:17:24.358 00:42:42 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:24.358 00:42:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:24.358 00:42:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:24.358 00:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.358 ************************************ 00:17:24.359 START TEST nvmf_queue_depth 00:17:24.359 ************************************ 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:24.359 * Looking for test storage... 00:17:24.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.359 00:42:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.949 
00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:30.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.949 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:30.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:30.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:30.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:17:30.950 00:17:30.950 --- 10.0.0.2 ping statistics --- 00:17:30.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.950 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:17:30.950 00:17:30.950 --- 10.0.0.1 ping statistics --- 00:17:30.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.950 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=383059 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 383059 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 383059 ']' 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:30.950 00:42:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.950 [2024-06-08 00:42:48.985645] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
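The ping pair above is the final step of nvmf_tcp_init, which splits the two E810 ports into a point-to-point link: cvl_0_0 is moved into a namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator side. Condensed from the nvmf/common.sh@242-268 trace (same commands, comments added):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
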
00:17:30.950 [2024-06-08 00:42:48.985711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.950 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.950 [2024-06-08 00:42:49.073449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.950 [2024-06-08 00:42:49.166075] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.950 [2024-06-08 00:42:49.166136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.950 [2024-06-08 00:42:49.166144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.950 [2024-06-08 00:42:49.166157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.950 [2024-06-08 00:42:49.166162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.950 [2024-06-08 00:42:49.166190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.522 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:31.522 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:17:31.522 00:42:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.522 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:31.522 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 [2024-06-08 00:42:49.821713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 Malloc0 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.784 00:42:49 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 [2024-06-08 00:42:49.893312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=383094 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 383094 /var/tmp/bdevperf.sock 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 383094 ']' 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:31.784 00:42:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:31.784 [2024-06-08 00:42:49.948942] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
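Between the listener coming up and the EAL banner below, queue_depth.sh has done all of its plumbing over JSON-RPC: target setup on the default socket, then controller attach and test kick-off on bdevperf's private socket (bdevperf was started with -z, so it idles until told to run). The sequence as traced, with rpc_cmd expanded to an illustrative rpc.py path:

    RPC=./scripts/rpc.py   # rpc_cmd in the harness wraps this

    # target side (the UNIX socket /var/tmp/spdk.sock crosses the netns boundary)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: attach over bdevperf's own socket, then start the run
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 on the bdevperf command line is the queue depth under test, and -w verify makes bdevperf check read-back data rather than just timing the I/O.
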
00:17:31.784 [2024-06-08 00:42:49.949008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383094 ] 00:17:31.784 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.784 [2024-06-08 00:42:50.014418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.045 [2024-06-08 00:42:50.094297] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:32.616 NVMe0n1 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.616 00:42:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.876 Running I/O for 10 seconds... 00:17:42.908 00:17:42.908 Latency(us) 00:17:42.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.908 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:42.908 Verification LBA range: start 0x0 length 0x4000 00:17:42.908 NVMe0n1 : 10.06 11598.51 45.31 0.00 0.00 88014.04 24357.55 70778.88 00:17:42.908 =================================================================================================================== 00:17:42.908 Total : 11598.51 45.31 0.00 0.00 88014.04 24357.55 70778.88 00:17:42.908 0 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 383094 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 383094 ']' 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 383094 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 383094 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 383094' 00:17:42.908 killing process with pid 383094 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 383094 00:17:42.908 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.908 00:17:42.908 Latency(us) 00:17:42.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.908 =================================================================================================================== 00:17:42.908 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.908 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 383094 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.169 rmmod nvme_tcp 00:17:43.169 rmmod nvme_fabrics 00:17:43.169 rmmod nvme_keyring 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 383059 ']' 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 383059 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 383059 ']' 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 383059 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 383059 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 383059' 00:17:43.169 killing process with pid 383059 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 383059 00:17:43.169 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 383059 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.429 00:43:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.344 00:43:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.344 00:17:45.344 real 0m21.401s 00:17:45.344 user 0m25.216s 00:17:45.344 sys 0m6.231s 
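The shutdown just traced (nvmftestfini plus nvmf_tcp_fini) reduces to the pattern below. The modprobe -r calls are retried up to 20 times in the harness; the ip netns delete line is an assumption about what _remove_spdk_ns does, while the rest appears verbatim above:

    sync
    modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess on the nvmf_tgt pid
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
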
00:17:45.344 00:43:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:45.344 00:43:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:45.344 ************************************ 00:17:45.344 END TEST nvmf_queue_depth 00:17:45.344 ************************************ 00:17:45.344 00:43:03 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:45.344 00:43:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:45.344 00:43:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:45.344 00:43:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.344 ************************************ 00:17:45.344 START TEST nvmf_target_multipath 00:17:45.344 ************************************ 00:17:45.344 00:43:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:45.606 * Looking for test storage... 00:17:45.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.606 00:43:03 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.606 00:43:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
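The enormous PATH values in the export trace above are expected, not corruption: paths/export.sh prepends the go/protoc/golangci toolchain directories every time it is sourced, and by this point it has been sourced once per test, so the prefix repeats. Purely as an illustration (the harness does not do this), the duplicates could be collapsed while preserving order:

    # Keep only the first occurrence of each PATH entry.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # drop the trailing separator awk leaves behind
    export PATH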
00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.607 00:43:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:53.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:53.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.757 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:53.758 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:53.758 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:17:53.758 00:17:53.758 --- 10.0.0.2 ping statistics --- 00:17:53.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.758 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:17:53.758 00:17:53.758 --- 10.0.0.1 ping statistics --- 00:17:53.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.758 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:53.758 only one NIC for nvmf test 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.758 rmmod nvme_tcp 00:17:53.758 rmmod nvme_fabrics 00:17:53.758 rmmod nvme_keyring 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.758 00:43:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.143 00:17:55.143 real 0m9.464s 00:17:55.143 user 0m1.967s 00:17:55.143 sys 0m5.400s 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:55.143 00:43:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:55.143 ************************************ 00:17:55.143 END TEST nvmf_target_multipath 00:17:55.143 ************************************ 00:17:55.143 00:43:13 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:55.143 00:43:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:55.143 00:43:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:55.143 00:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.143 ************************************ 00:17:55.143 START TEST nvmf_zcopy 00:17:55.143 ************************************ 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:55.143 * Looking for test storage... 
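Before the zcopy storage probe continues, it is worth spelling out why nvmf_target_multipath finished in nine seconds with no I/O: this rig exposes no second target address, so the test hits its single-NIC guard and exits cleanly. Reconstructed from the multipath.sh@45-48 trace lines above (the variable name is inferred from the earlier NVMF_SECOND_TARGET_IP= assignment in nvmf/common.sh):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi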
00:17:55.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.143 00:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:03.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.288 
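gather_supported_nvmf_pci_devs is repeating for zcopy exactly what it did for multipath: build allowlists of E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox device IDs, intersect them with the PCI bus cache, and, since SPDK_TEST_NVMF_NICS=e810, keep only the E810 hits. The two matches it is about to report can be checked by hand with the device paths from this run:

    lspci -d 8086:159b                          # lists 0000:4b:00.0 and 0000:4b:00.1
    ls /sys/bus/pci/devices/0000:4b:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:4b:00.1/net    # -> cvl_0_1

The /sys/bus/pci/devices/$pci/net glob is the same one common.sh@383 uses to map each port to its renamed netdev.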
00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:03.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:03.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:03.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:18:03.288 00:18:03.288 --- 10.0.0.2 ping statistics --- 00:18:03.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.288 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:18:03.288 00:18:03.288 --- 10.0.0.1 ping statistics --- 00:18:03.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.288 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.288 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=393719 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 393719 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 393719 ']' 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:03.289 00:43:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 [2024-06-08 00:43:20.461895] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:03.289 [2024-06-08 00:43:20.461947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.289 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.289 [2024-06-08 00:43:20.544622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.289 [2024-06-08 00:43:20.619802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.289 [2024-06-08 00:43:20.619853] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.289 [2024-06-08 00:43:20.619861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.289 [2024-06-08 00:43:20.619868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.289 [2024-06-08 00:43:20.619873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.289 [2024-06-08 00:43:20.619905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 [2024-06-08 00:43:21.283709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 [2024-06-08 00:43:21.307983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 malloc0 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 
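The target side is now fully assembled: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace a few lines up, and the rpc_cmd calls create a zero-copy-enabled TCP transport, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks, which the next step attaches as NSID 1. Collapsed into plain commands (binary and rpc.py paths shortened):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1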
00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.289 { 00:18:03.289 "params": { 00:18:03.289 "name": "Nvme$subsystem", 00:18:03.289 "trtype": "$TEST_TRANSPORT", 00:18:03.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.289 "adrfam": "ipv4", 00:18:03.289 "trsvcid": "$NVMF_PORT", 00:18:03.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.289 "hdgst": ${hdgst:-false}, 00:18:03.289 "ddgst": ${ddgst:-false} 00:18:03.289 }, 00:18:03.289 "method": "bdev_nvme_attach_controller" 00:18:03.289 } 00:18:03.289 EOF 00:18:03.289 )") 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:03.289 00:43:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.289 "params": { 00:18:03.289 "name": "Nvme1", 00:18:03.289 "trtype": "tcp", 00:18:03.289 "traddr": "10.0.0.2", 00:18:03.289 "adrfam": "ipv4", 00:18:03.289 "trsvcid": "4420", 00:18:03.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.289 "hdgst": false, 00:18:03.289 "ddgst": false 00:18:03.289 }, 00:18:03.289 "method": "bdev_nvme_attach_controller" 00:18:03.289 }' 00:18:03.289 [2024-06-08 00:43:21.404199] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:03.289 [2024-06-08 00:43:21.404259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393765 ] 00:18:03.289 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.289 [2024-06-08 00:43:21.467950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.289 [2024-06-08 00:43:21.543885] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.861 Running I/O for 10 seconds... 
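Note that the initiator never touches the kernel nvme driver in this test: bdevperf consumes the generated JSON shown above, whose single bdev_nvme_attach_controller entry dials nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 over TCP. The --json /dev/fd/62 argument is just bash process substitution, so the config never lands on disk; the invocation is equivalent to:

    # 10 s of queue-depth-128 verify I/O at 8 KiB against the attached bdev
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192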
00:18:13.948 00:18:13.949 Latency(us) 00:18:13.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:13.949 Verification LBA range: start 0x0 length 0x1000 00:18:13.949 Nvme1n1 : 10.01 8132.42 63.53 0.00 0.00 15674.87 1788.59 27525.12 00:18:13.949 =================================================================================================================== 00:18:13.949 Total : 8132.42 63.53 0.00 0.00 15674.87 1788.59 27525.12 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=395836 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:13.949 { 00:18:13.949 "params": { 00:18:13.949 "name": "Nvme$subsystem", 00:18:13.949 "trtype": "$TEST_TRANSPORT", 00:18:13.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.949 "adrfam": "ipv4", 00:18:13.949 "trsvcid": "$NVMF_PORT", 00:18:13.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.949 "hdgst": ${hdgst:-false}, 00:18:13.949 "ddgst": ${ddgst:-false} 00:18:13.949 }, 00:18:13.949 "method": "bdev_nvme_attach_controller" 00:18:13.949 } 00:18:13.949 EOF 00:18:13.949 )") 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:13.949 [2024-06-08 00:43:32.020477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.020508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:13.949 00:43:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:13.949 "params": { 00:18:13.949 "name": "Nvme1", 00:18:13.949 "trtype": "tcp", 00:18:13.949 "traddr": "10.0.0.2", 00:18:13.949 "adrfam": "ipv4", 00:18:13.949 "trsvcid": "4420", 00:18:13.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.949 "hdgst": false, 00:18:13.949 "ddgst": false 00:18:13.949 }, 00:18:13.949 "method": "bdev_nvme_attach_controller" 00:18:13.949 }' 00:18:13.949 [2024-06-08 00:43:32.032467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.032475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.044494] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.044502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.056524] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.056531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.059123] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:13.949 [2024-06-08 00:43:32.059170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395836 ] 00:18:13.949 [2024-06-08 00:43:32.068554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.068562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.080586] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.080594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.949 [2024-06-08 00:43:32.092618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.092626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.104649] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.104657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.116681] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.116688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.116722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.949 [2024-06-08 00:43:32.128712] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.128722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.140741] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.140750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.152774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.152784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.164805] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.164814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.176836] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.176844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.180207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.949 [2024-06-08 00:43:32.188867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.188874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.200903] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.200916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.212929] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.212940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.949 [2024-06-08 00:43:32.224960] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.949 [2024-06-08 00:43:32.224970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.236990] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.236998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.249027] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.249039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.261059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.261074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.273088] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.273097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.285119] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.285129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.297150] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.297159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.309185] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.309196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 
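The stream of 'Requested NSID 1 already in use' errors here is deliberate, not a failure: while the second bdevperf run (perfpid 395836) drives 50/50 randrw zcopy I/O, the test keeps re-issuing nvmf_subsystem_add_ns for the namespace that already exists, exercising the subsystem pause/resume RPC path (hence nvmf_rpc_ns_paused in every message) and checking that each attempt is rejected without disturbing in-flight I/O. A sketch of that loop, with the loop condition as an assumption rather than something visible in the trace:

    # Hammer the subsystem-pausing RPC while the workload runs; every call is
    # expected to fail with "Requested NSID 1 already in use".
    while kill -0 395836 2>/dev/null; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done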
[2024-06-08 00:43:32.351949] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.351959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.361327] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.361336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 Running I/O for 5 seconds... 00:18:14.210 [2024-06-08 00:43:32.377583] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.210 [2024-06-08 00:43:32.377600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.210 [2024-06-08 00:43:32.393097] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.393113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.406527] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.406543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.419286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.419302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.431429] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.431445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.444407] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.444423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.457405] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.457421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.470237] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.470254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.211 [2024-06-08 00:43:32.483221] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.211 [2024-06-08 00:43:32.483237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.472 [2024-06-08 00:43:32.495761] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.472 [2024-06-08 00:43:32.495777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.472 [2024-06-08 00:43:32.508884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.472 [2024-06-08 00:43:32.508899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.472 [2024-06-08 00:43:32.522249] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.472 [2024-06-08 00:43:32.522263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.472 [2024-06-08 00:43:32.535711] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
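[Annotation: the repeated pair above is SPDK's nvmf target rejecting a duplicate namespace attach: spdk_nvmf_subsystem_add_ns_ext refuses an NSID that is already attached to the subsystem, and the RPC layer then reports that it could not add the namespace, so this test appears to be exercising that negative path repeatedly while I/O runs. A minimal sketch of how the same error can be provoked against a running target with SPDK's rpc.py follows; the NQN and bdev name are illustrative assumptions, not values taken from this run:

    # Sketch only: provoke "Requested NSID 1 already in use" on a running
    # SPDK nvmf target. nqn.2016-06.io.spdk:cnode1 and Malloc0 are
    # hypothetical names chosen for illustration.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # Re-adding NSID 1 fails: the target logs the subsystem.c/nvmf_rpc.c
    # error pair seen above and rpc.py returns a nonzero exit status.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
]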
[Elided: the same two-line error pair (subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 13 ms intervals throughout the 5-second I/O window, from 00:43:32.15 to 00:43:35.95 (console timestamps 00:18:13.949 through 00:18:17.867). Only the final occurrence is kept below.]
00:18:17.867 [2024-06-08 00:43:35.949974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:35.949988]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:35.962813] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:35.962828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:35.975726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:35.975740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:35.988199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:35.988218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.001214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.001228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.014515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.014529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.027883] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.027898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.040605] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.040620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.053237] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.053251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.066156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.066171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.079418] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.079433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.092832] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.092847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.105861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.105875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.119348] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.119362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.132515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.132529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.867 [2024-06-08 00:43:36.145456] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.867 [2024-06-08 00:43:36.145470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.158962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.158977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.171603] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.171617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.184296] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.184311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.197018] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.197032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.209884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.209897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.223251] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.223266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.236256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.236274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.249711] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.249725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.263195] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.263209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.276253] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.276267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.289205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.289220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.302553] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.302567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.315999] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.316013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.329175] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.329189] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.342004] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.342019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.354620] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.354635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.368319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.368333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.381615] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.381630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.395012] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.395026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.128 [2024-06-08 00:43:36.408276] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.128 [2024-06-08 00:43:36.408291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.421388] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.421408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.434008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.434022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.447058] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.447072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.460294] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.460308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.473361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.473375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.486720] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.486735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.500061] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.500076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.512897] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.512912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.526130] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.526144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.539158] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.539174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.551909] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.551924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.564638] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.564653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.577514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.577528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.590528] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.590542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.603584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.603599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.616670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.616684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.629931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.629945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.642961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.642975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.655663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.655678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.389 [2024-06-08 00:43:36.667955] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.389 [2024-06-08 00:43:36.667970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.681287] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.681301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.694760] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.694775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.707492] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.707506] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.720543] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.720557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.733931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.733947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.747131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.747146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.760314] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.760329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.773381] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.773395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.785948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.785963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.799063] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.799077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.812271] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.812285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.824884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.824898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.838299] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.838313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.851682] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.851696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.864390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.864411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.877485] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.877499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.890577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.890592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.903884] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.903898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.916902] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.916917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.650 [2024-06-08 00:43:36.930451] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.650 [2024-06-08 00:43:36.930466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:36.943717] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:36.943732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:36.956856] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:36.956870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:36.970339] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:36.970354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:36.983307] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:36.983322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:36.996352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:36.996366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.009000] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.009015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.022368] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.022383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.035150] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.035165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.048658] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.048673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.062089] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.062105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.075330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.075345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.087830] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.087844] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.100719] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.100734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.113965] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.113980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.126895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.126910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.139750] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.139764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.153248] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.153263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.165657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.165672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.179099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.179115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.911 [2024-06-08 00:43:37.192681] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.911 [2024-06-08 00:43:37.192696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.205774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.205789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.218888] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.218903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.231798] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.231813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.244932] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.244946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.257507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.257522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.270508] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:19.172 [2024-06-08 00:43:37.270523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.172 [2024-06-08 00:43:37.283499] 
00:18:19.172 [2024-06-08 00:43:37.283499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:19.172 [2024-06-08 00:43:37.283514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair repeats at 00:43:37.296, .310, .323, .336, .349, .363 and .376; 7 duplicate pairs elided]
00:18:19.172
00:18:19.172 Latency(us)
00:18:19.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.172 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:19.172 Nvme1n1 : 5.01 19576.56 152.94 0.00 0.00 6532.52 2594.13 17585.49
00:18:19.172 ===================================================================================================================
00:18:19.172 Total : 19576.56 152.94 0.00 0.00 6532.52 2594.13 17585.49
00:18:19.172 [2024-06-08 00:43:37.386207] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:19.172 [2024-06-08 00:43:37.386221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair repeats at roughly 12 ms intervals through 00:43:37.506; 10 duplicate pairs elided]
00:18:19.433 [2024-06-08 00:43:37.518544] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:19.433 [2024-06-08 00:43:37.518553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
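Each error pair above is one failed nvmf_subsystem_add_ns RPC: NSID 1 is already attached, and the test keeps re-issuing the call while the verification job is still running, exercising the subsystem pause/resume path under live I/O. A minimal sketch of that kind of loop follows; the NQN and NSID match the trace and the bdev name malloc0 appears in the teardown below, but $perf_pid and the loop shape are assumptions, the real loop lives in target/zcopy.sh:

  # keep re-adding an NSID that is already in use; every call is expected to fail
  # with the subsystem.c:2037 / nvmf_rpc.c:1546 pair seen above
  while kill -0 "$perf_pid" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done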
00:18:19.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (395836) - No such process
00:18:19.433 00:43:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 395836
00:43:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:43:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:19.433 delay0
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:43:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:43:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:18:19.433 EAL: No free 2048 kB hugepages reported on node 1
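The bdev_delay_create step above wraps malloc0 in delay0 with 1,000,000 us (one second) of artificial latency on reads and writes, so the abort example started at zcopy.sh@56 always finds commands still in flight to cancel. That run can be reproduced by hand against a live target with the exact arguments from the trace (workspace path as on this build node):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 5 s run, queue depth 64, 50/50 random read/write on core 0, aborting in-flight I/O
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In the results below, both the success and unsuccess counts are expected: an abort that loses the race with a completing command simply fails, which the test tolerates.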
00:18:19.433 [2024-06-08 00:43:37.659035] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:18:26.015 Initializing NVMe Controllers
00:18:26.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:26.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:26.016 Initialization complete. Launching workers.
00:18:26.016 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 169
00:18:26.016 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 446, failed to submit 43
00:18:26.016 success 263, unsuccess 183, failed 0
00:18:26.016 00:43:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:43:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:26.016 rmmod nvme_tcp
00:18:26.016 rmmod nvme_fabrics
00:18:26.016 rmmod nvme_keyring
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 393719 ']'
00:43:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 393719
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 393719 ']'
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 393719
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 393719
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 393719'
killing process with pid 393719
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 393719
00:43:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 393719
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:43:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:18:26.016 00:43:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.016 00:43:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.923 00:43:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:27.923 00:18:27.923 real 0m33.024s 00:18:27.923 user 0m44.875s 00:18:27.923 sys 0m9.885s 00:18:27.923 00:43:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:27.923 00:43:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 ************************************ 00:18:27.923 END TEST nvmf_zcopy 00:18:27.923 ************************************ 00:18:28.184 00:43:46 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:28.184 00:43:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:28.184 00:43:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:28.184 00:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.184 ************************************ 00:18:28.184 START TEST nvmf_nmic 00:18:28.184 ************************************ 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:28.184 * Looking for test storage... 00:18:28.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.184 00:43:46 
nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 
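At this point test/nvmf/common.sh has been fully sourced: the target ports 4420-4422 are fixed and a fresh host identity was generated with nvme gen-hostnqn. The NVME_HOSTNQN/NVME_HOST variables exist so later initiator-side steps can identify themselves consistently. A sketch of how such values are typically consumed, illustrative only since the nmic test drives its own connect sequence further down:

  # regenerate a host NQN of the same shape as NVME_HOSTNQN above
  HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  # connect as that host once the test subsystem is listening
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"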
00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.184 00:43:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.769 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.030 00:43:53 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:35.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:35.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:35.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:35.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.030 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.031 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.031 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:18:35.291 00:18:35.291 --- 10.0.0.2 ping statistics --- 00:18:35.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.291 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:18:35.291 00:18:35.291 --- 10.0.0.1 ping statistics --- 00:18:35.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.291 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=402381 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 402381 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 402381 ']' 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:35.291 00:43:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.291 [2024-06-08 00:43:53.485029] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
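For reference, the nvmf_tcp_init sequence traced above reduces to the shell sketch below. Interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are as probed on this rig; other hosts will differ.

    # Condensed from the trace above: one port of the dual-port NIC becomes
    # the target inside a network namespace, its peer stays in the root
    # namespace as the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Moving one port into its own namespace forces initiator/target traffic across the physical link rather than letting the kernel short-circuit it over loopback, which is what the nvmf-tcp-phy job is exercising.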
00:18:35.291 [2024-06-08 00:43:53.485095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.291 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.291 [2024-06-08 00:43:53.557556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.588 [2024-06-08 00:43:53.632043] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.588 [2024-06-08 00:43:53.632085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.588 [2024-06-08 00:43:53.632093] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.588 [2024-06-08 00:43:53.632099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.588 [2024-06-08 00:43:53.632104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.588 [2024-06-08 00:43:53.632171] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.588 [2024-06-08 00:43:53.632305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.588 [2024-06-08 00:43:53.632465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.588 [2024-06-08 00:43:53.632466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 [2024-06-08 00:43:54.313946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 Malloc0 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 [2024-06-08 00:43:54.373141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.160 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:36.161 test case1: single bdev can't be used in multiple subsystems 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.161 [2024-06-08 00:43:54.409066] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:36.161 [2024-06-08 00:43:54.409085] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:36.161 [2024-06-08 00:43:54.409093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.161 request: 00:18:36.161 { 00:18:36.161 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:36.161 "namespace": { 00:18:36.161 "bdev_name": "Malloc0", 00:18:36.161 "no_auto_visible": false 00:18:36.161 }, 00:18:36.161 "method": "nvmf_subsystem_add_ns", 00:18:36.161 "req_id": 1 00:18:36.161 } 00:18:36.161 Got JSON-RPC error response 00:18:36.161 response: 00:18:36.161 { 00:18:36.161 "code": -32602, 00:18:36.161 "message": "Invalid parameters" 00:18:36.161 } 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:36.161 Adding namespace failed - expected result. 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:36.161 test case2: host connect to nvmf target in multiple paths 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:36.161 [2024-06-08 00:43:54.421196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.161 00:43:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:38.075 00:43:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:39.458 00:43:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:39.458 00:43:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:18:39.458 00:43:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.458 00:43:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:39.458 00:43:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:18:41.370 00:43:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:41.370 [global] 00:18:41.370 thread=1 00:18:41.370 invalidate=1 00:18:41.370 rw=write 00:18:41.370 time_based=1 00:18:41.370 runtime=1 00:18:41.370 ioengine=libaio 00:18:41.370 direct=1 00:18:41.370 bs=4096 00:18:41.370 iodepth=1 00:18:41.370 norandommap=0 00:18:41.370 numjobs=1 00:18:41.370 00:18:41.370 verify_dump=1 00:18:41.370 verify_backlog=512 00:18:41.370 verify_state_save=0 00:18:41.370 do_verify=1 00:18:41.370 verify=crc32c-intel 00:18:41.370 [job0] 00:18:41.370 filename=/dev/nvme0n1 00:18:41.370 Could not set queue depth (nvme0n1) 00:18:41.630 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.630 fio-3.35 00:18:41.630 Starting 1 thread 00:18:43.012 00:18:43.012 job0: (groupid=0, jobs=1): err= 0: pid=403666: Sat Jun 8 00:44:01 2024 00:18:43.012 read: IOPS=505, BW=2022KiB/s 
(2071kB/s)(2024KiB/1001msec) 00:18:43.012 slat (nsec): min=23876, max=57896, avg=24779.18, stdev=3120.23 00:18:43.012 clat (usec): min=917, max=1327, avg=1144.13, stdev=60.22 00:18:43.012 lat (usec): min=941, max=1351, avg=1168.91, stdev=60.63 00:18:43.012 clat percentiles (usec): 00:18:43.012 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1090], 00:18:43.012 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:18:43.013 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:18:43.013 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:18:43.013 | 99.99th=[ 1336] 00:18:43.013 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:43.013 slat (nsec): min=9412, max=67473, avg=28107.12, stdev=9402.75 00:18:43.013 clat (usec): min=382, max=1196, avg=753.71, stdev=105.13 00:18:43.013 lat (usec): min=392, max=1228, avg=781.82, stdev=109.61 00:18:43.013 clat percentiles (usec): 00:18:43.013 | 1.00th=[ 482], 5.00th=[ 553], 10.00th=[ 603], 20.00th=[ 668], 00:18:43.013 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 799], 00:18:43.013 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 881], 00:18:43.013 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1205], 99.95th=[ 1205], 00:18:43.013 | 99.99th=[ 1205] 00:18:43.013 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:43.013 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:43.013 lat (usec) : 500=0.98%, 750=20.24%, 1000=29.67% 00:18:43.013 lat (msec) : 2=49.12% 00:18:43.013 cpu : usr=1.40%, sys=2.90%, ctx=1018, majf=0, minf=1 00:18:43.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.013 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:43.013 00:18:43.013 Run status group 0 (all jobs): 00:18:43.013 READ: bw=2022KiB/s (2071kB/s), 2022KiB/s-2022KiB/s (2071kB/s-2071kB/s), io=2024KiB (2073kB), run=1001-1001msec 00:18:43.013 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:18:43.013 00:18:43.013 Disk stats (read/write): 00:18:43.013 nvme0n1: ios=469/512, merge=0/0, ticks=526/373, in_queue=899, util=94.29% 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic 
-- target/nmic.sh@53 -- # nvmftestfini 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.013 rmmod nvme_tcp 00:18:43.013 rmmod nvme_fabrics 00:18:43.013 rmmod nvme_keyring 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 402381 ']' 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 402381 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 402381 ']' 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 402381 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:43.013 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 402381 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 402381' 00:18:43.273 killing process with pid 402381 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 402381 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 402381 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.273 00:44:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.816 00:44:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.816 00:18:45.816 real 0m17.284s 00:18:45.816 user 0m47.573s 00:18:45.816 sys 0m6.067s 00:18:45.816 00:44:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:45.816 00:44:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.816 ************************************ 00:18:45.816 END TEST nvmf_nmic 00:18:45.816 ************************************ 00:18:45.816 00:44:03 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:45.816 00:44:03 nvmf_tcp 
-- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:45.816 00:44:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:45.816 00:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.816 ************************************ 00:18:45.816 START TEST nvmf_fio_target 00:18:45.816 ************************************ 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:45.816 * Looking for test storage... 00:18:45.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.816 00:44:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.817 00:44:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.405 00:44:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:52.405 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.405 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:52.405 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.406 00:44:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:52.406 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:52.406 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.406 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:18:52.668 00:18:52.668 --- 10.0.0.2 ping statistics --- 00:18:52.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.668 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:18:52.668 00:18:52.668 --- 10.0.0.1 ping statistics --- 00:18:52.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.668 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=408115 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 408115 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 408115 ']' 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
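The nvmfappstart step above comes down to launching nvmf_tgt inside the test namespace and waiting for its RPC socket. A minimal sketch, with flags as traced (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xF for cores 0-3); the retry loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh:

    # Start the target under the namespace created by nvmf_tcp_init.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Simplified waitforlisten: poll for the RPC socket (max_retries=100, as above).
    for ((i = 0; i < 100; i++)); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

Once the socket exists, the rpc.py calls that follow (bdev_malloc_create, bdev_raid_create, nvmf_create_subsystem, ...) configure this target through /var/tmp/spdk.sock.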
00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:52.668 00:44:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 [2024-06-08 00:44:10.799252] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:18:52.668 [2024-06-08 00:44:10.799320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.668 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.668 [2024-06-08 00:44:10.875127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.929 [2024-06-08 00:44:10.951342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.929 [2024-06-08 00:44:10.951381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.929 [2024-06-08 00:44:10.951388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.929 [2024-06-08 00:44:10.951395] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.929 [2024-06-08 00:44:10.951407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.929 [2024-06-08 00:44:10.951477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.929 [2024-06-08 00:44:10.951611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.929 [2024-06-08 00:44:10.951771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.929 [2024-06-08 00:44:10.951771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.500 00:44:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.500 [2024-06-08 00:44:11.757520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.762 00:44:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.762 00:44:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:53.762 00:44:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.022 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:54.022 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.283 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:54.283 00:44:12 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.283 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:54.283 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:54.543 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.803 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:54.803 00:44:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.803 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:54.803 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.064 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:55.064 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:55.324 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:55.324 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:55.324 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.585 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:55.585 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:55.846 00:44:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.846 [2024-06-08 00:44:14.014965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.846 00:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:56.107 00:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:56.107 00:44:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:18:58.065 00:44:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:18:59.977 00:44:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:59.977 [global] 00:18:59.977 thread=1 00:18:59.977 invalidate=1 00:18:59.977 rw=write 00:18:59.977 time_based=1 00:18:59.977 runtime=1 00:18:59.977 ioengine=libaio 00:18:59.977 direct=1 00:18:59.977 bs=4096 00:18:59.977 iodepth=1 00:18:59.977 norandommap=0 00:18:59.977 numjobs=1 00:18:59.977 00:18:59.977 verify_dump=1 00:18:59.977 verify_backlog=512 00:18:59.977 verify_state_save=0 00:18:59.977 do_verify=1 00:18:59.977 verify=crc32c-intel 00:18:59.977 [job0] 00:18:59.977 filename=/dev/nvme0n1 00:18:59.977 [job1] 00:18:59.977 filename=/dev/nvme0n2 00:18:59.977 [job2] 00:18:59.977 filename=/dev/nvme0n3 00:18:59.977 [job3] 00:18:59.977 filename=/dev/nvme0n4 00:18:59.977 Could not set queue depth (nvme0n1) 00:18:59.977 Could not set queue depth (nvme0n2) 00:18:59.977 Could not set queue depth (nvme0n3) 00:18:59.978 Could not set queue depth (nvme0n4) 00:19:00.237 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.237 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.237 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.238 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.238 fio-3.35 00:19:00.238 Starting 4 threads 00:19:01.633 00:19:01.633 job0: (groupid=0, jobs=1): err= 0: pid=409883: Sat Jun 8 00:44:19 2024 00:19:01.633 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:19:01.633 slat (nsec): min=23132, max=25597, avg=25240.94, stdev=553.39 00:19:01.633 clat (usec): min=857, max=42966, avg=39530.77, stdev=9978.11 00:19:01.633 lat (usec): min=883, max=42991, avg=39556.01, stdev=9978.02 00:19:01.633 clat percentiles (usec): 00:19:01.633 | 1.00th=[ 857], 5.00th=[ 857], 10.00th=[41157], 20.00th=[41681], 00:19:01.633 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:01.633 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:19:01.633 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:01.634 | 99.99th=[42730] 00:19:01.634 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:01.634 slat (usec): min=9, max=2863, avg=38.19, stdev=130.02 00:19:01.634 clat (usec): min=189, max=1008, 
avg=596.93, stdev=168.46 00:19:01.634 lat (usec): min=216, max=3680, avg=635.12, stdev=223.40 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 245], 5.00th=[ 314], 10.00th=[ 388], 20.00th=[ 449], 00:19:01.634 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 635], 00:19:01.634 | 70.00th=[ 685], 80.00th=[ 742], 90.00th=[ 832], 95.00th=[ 881], 00:19:01.634 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1012], 99.95th=[ 1012], 00:19:01.634 | 99.99th=[ 1012] 00:19:01.634 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.634 lat (usec) : 250=1.13%, 500=25.33%, 750=51.61%, 1000=18.71% 00:19:01.634 lat (msec) : 2=0.19%, 50=3.02% 00:19:01.634 cpu : usr=0.90%, sys=1.50%, ctx=532, majf=0, minf=1 00:19:01.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.634 job1: (groupid=0, jobs=1): err= 0: pid=409884: Sat Jun 8 00:44:19 2024 00:19:01.634 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:01.634 slat (nsec): min=8998, max=70773, avg=25195.20, stdev=4665.60 00:19:01.634 clat (usec): min=1041, max=1510, avg=1246.39, stdev=71.99 00:19:01.634 lat (usec): min=1057, max=1552, avg=1271.59, stdev=72.02 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 1074], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1188], 00:19:01.634 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:19:01.634 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1369], 00:19:01.634 | 99.00th=[ 1401], 99.50th=[ 1434], 99.90th=[ 1516], 99.95th=[ 1516], 00:19:01.634 | 99.99th=[ 1516] 00:19:01.634 write: IOPS=621, BW=2486KiB/s (2545kB/s)(2488KiB/1001msec); 0 zone resets 00:19:01.634 slat (usec): min=9, max=3235, avg=25.76, stdev=129.50 00:19:01.634 clat (usec): min=144, max=1097, avg=523.82, stdev=194.69 00:19:01.634 lat (usec): min=154, max=4083, avg=549.57, stdev=245.93 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 161], 5.00th=[ 265], 10.00th=[ 297], 20.00th=[ 343], 00:19:01.634 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 486], 60.00th=[ 545], 00:19:01.634 | 70.00th=[ 635], 80.00th=[ 717], 90.00th=[ 807], 95.00th=[ 857], 00:19:01.634 | 99.00th=[ 938], 99.50th=[ 1020], 99.90th=[ 1106], 99.95th=[ 1106], 00:19:01.634 | 99.99th=[ 1106] 00:19:01.634 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.634 lat (usec) : 250=2.47%, 500=25.84%, 750=17.46%, 1000=8.73% 00:19:01.634 lat (msec) : 2=45.50% 00:19:01.634 cpu : usr=1.30%, sys=2.70%, ctx=1136, majf=0, minf=1 00:19:01.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 issued rwts: total=512,622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.634 job2: (groupid=0, jobs=1): err= 0: pid=409885: Sat Jun 8 00:44:19 2024 00:19:01.634 read: IOPS=497, 
BW=1990KiB/s (2038kB/s)(1992KiB/1001msec) 00:19:01.634 slat (nsec): min=7463, max=48686, avg=27323.35, stdev=3442.23 00:19:01.634 clat (usec): min=870, max=1421, avg=1154.57, stdev=93.29 00:19:01.634 lat (usec): min=897, max=1448, avg=1181.89, stdev=93.37 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 930], 5.00th=[ 1029], 10.00th=[ 1045], 20.00th=[ 1074], 00:19:01.634 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:19:01.634 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1303], 00:19:01.634 | 99.00th=[ 1369], 99.50th=[ 1369], 99.90th=[ 1418], 99.95th=[ 1418], 00:19:01.634 | 99.99th=[ 1418] 00:19:01.634 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:01.634 slat (usec): min=9, max=3318, avg=39.66, stdev=145.62 00:19:01.634 clat (usec): min=406, max=1027, avg=748.33, stdev=106.28 00:19:01.634 lat (usec): min=416, max=4086, avg=787.99, stdev=183.39 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 498], 5.00th=[ 578], 10.00th=[ 611], 20.00th=[ 668], 00:19:01.634 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:19:01.634 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 889], 95.00th=[ 922], 00:19:01.634 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:19:01.634 | 99.99th=[ 1029] 00:19:01.634 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.634 lat (usec) : 500=0.79%, 750=26.14%, 1000=24.65% 00:19:01.634 lat (msec) : 2=48.42% 00:19:01.634 cpu : usr=1.60%, sys=4.60%, ctx=1013, majf=0, minf=1 00:19:01.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 issued rwts: total=498,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.634 job3: (groupid=0, jobs=1): err= 0: pid=409886: Sat Jun 8 00:44:19 2024 00:19:01.634 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:01.634 slat (nsec): min=7245, max=57254, avg=26971.98, stdev=4626.52 00:19:01.634 clat (usec): min=541, max=1388, avg=1088.89, stdev=142.68 00:19:01.634 lat (usec): min=569, max=1418, avg=1115.86, stdev=142.65 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 979], 00:19:01.634 | 30.00th=[ 1012], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:19:01.634 | 70.00th=[ 1172], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1287], 00:19:01.634 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1385], 99.95th=[ 1385], 00:19:01.634 | 99.99th=[ 1385] 00:19:01.634 write: IOPS=632, BW=2529KiB/s (2590kB/s)(2532KiB/1001msec); 0 zone resets 00:19:01.634 slat (nsec): min=9404, max=68437, avg=30268.91, stdev=10276.07 00:19:01.634 clat (usec): min=261, max=988, avg=632.65, stdev=117.63 00:19:01.634 lat (usec): min=273, max=1023, avg=662.92, stdev=121.55 00:19:01.634 clat percentiles (usec): 00:19:01.634 | 1.00th=[ 359], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 529], 00:19:01.634 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 668], 00:19:01.634 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 807], 00:19:01.634 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 988], 99.95th=[ 988], 00:19:01.634 | 99.99th=[ 988] 00:19:01.634 bw ( KiB/s): min= 
4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.634 lat (usec) : 500=7.77%, 750=38.86%, 1000=19.91% 00:19:01.634 lat (msec) : 2=33.45% 00:19:01.634 cpu : usr=1.90%, sys=4.90%, ctx=1146, majf=0, minf=1 00:19:01.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.634 issued rwts: total=512,633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.634 00:19:01.634 Run status group 0 (all jobs): 00:19:01.634 READ: bw=6150KiB/s (6297kB/s), 67.9KiB/s-2046KiB/s (69.6kB/s-2095kB/s), io=6156KiB (6304kB), run=1001-1001msec 00:19:01.634 WRITE: bw=9107KiB/s (9325kB/s), 2046KiB/s-2529KiB/s (2095kB/s-2590kB/s), io=9116KiB (9335kB), run=1001-1001msec 00:19:01.634 00:19:01.634 Disk stats (read/write): 00:19:01.634 nvme0n1: ios=55/512, merge=0/0, ticks=569/280, in_queue=849, util=86.77% 00:19:01.634 nvme0n2: ios=494/512, merge=0/0, ticks=744/243, in_queue=987, util=90.82% 00:19:01.634 nvme0n3: ios=405/512, merge=0/0, ticks=653/323, in_queue=976, util=95.14% 00:19:01.634 nvme0n4: ios=485/512, merge=0/0, ticks=861/266, in_queue=1127, util=94.34% 00:19:01.634 00:44:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:01.634 [global] 00:19:01.634 thread=1 00:19:01.634 invalidate=1 00:19:01.634 rw=randwrite 00:19:01.634 time_based=1 00:19:01.634 runtime=1 00:19:01.634 ioengine=libaio 00:19:01.634 direct=1 00:19:01.634 bs=4096 00:19:01.634 iodepth=1 00:19:01.634 norandommap=0 00:19:01.634 numjobs=1 00:19:01.634 00:19:01.634 verify_dump=1 00:19:01.634 verify_backlog=512 00:19:01.634 verify_state_save=0 00:19:01.634 do_verify=1 00:19:01.634 verify=crc32c-intel 00:19:01.634 [job0] 00:19:01.634 filename=/dev/nvme0n1 00:19:01.634 [job1] 00:19:01.634 filename=/dev/nvme0n2 00:19:01.634 [job2] 00:19:01.634 filename=/dev/nvme0n3 00:19:01.634 [job3] 00:19:01.634 filename=/dev/nvme0n4 00:19:01.634 Could not set queue depth (nvme0n1) 00:19:01.634 Could not set queue depth (nvme0n2) 00:19:01.634 Could not set queue depth (nvme0n3) 00:19:01.634 Could not set queue depth (nvme0n4) 00:19:01.895 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:01.895 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:01.895 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:01.895 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:01.895 fio-3.35 00:19:01.895 Starting 4 threads 00:19:03.303 00:19:03.303 job0: (groupid=0, jobs=1): err= 0: pid=410410: Sat Jun 8 00:44:21 2024 00:19:03.303 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec) 00:19:03.303 slat (nsec): min=26246, max=45960, avg=29286.18, stdev=6223.64 00:19:03.303 clat (usec): min=947, max=42024, avg=34550.69, stdev=15972.31 00:19:03.303 lat (usec): min=993, max=42051, avg=34579.97, stdev=15970.34 00:19:03.303 clat percentiles (usec): 00:19:03.303 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[ 1090], 20.00th=[41157], 00:19:03.303 | 30.00th=[41681], 
40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:19:03.303 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:03.303 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:03.303 | 99.99th=[42206] 00:19:03.303 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:19:03.303 slat (nsec): min=9583, max=69904, avg=32243.68, stdev=7985.16 00:19:03.303 clat (usec): min=451, max=1171, avg=810.46, stdev=116.57 00:19:03.303 lat (usec): min=484, max=1203, avg=842.70, stdev=118.33 00:19:03.303 clat percentiles (usec): 00:19:03.303 | 1.00th=[ 494], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 717], 00:19:03.303 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 848], 00:19:03.303 | 70.00th=[ 873], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 988], 00:19:03.303 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1172], 99.95th=[ 1172], 00:19:03.303 | 99.99th=[ 1172] 00:19:03.303 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.303 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.303 lat (usec) : 500=1.13%, 750=27.41%, 1000=65.78% 00:19:03.303 lat (msec) : 2=3.02%, 50=2.65% 00:19:03.303 cpu : usr=0.78%, sys=2.35%, ctx=533, majf=0, minf=1 00:19:03.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.303 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.303 job1: (groupid=0, jobs=1): err= 0: pid=410411: Sat Jun 8 00:44:21 2024 00:19:03.303 read: IOPS=449, BW=1798KiB/s (1841kB/s)(1800KiB/1001msec) 00:19:03.303 slat (nsec): min=24768, max=61735, avg=26266.52, stdev=4191.36 00:19:03.303 clat (usec): min=890, max=1271, avg=1131.29, stdev=45.03 00:19:03.303 lat (usec): min=915, max=1299, avg=1157.56, stdev=45.28 00:19:03.303 clat percentiles (usec): 00:19:03.303 | 1.00th=[ 1004], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1106], 00:19:03.303 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1139], 00:19:03.303 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1172], 95.00th=[ 1205], 00:19:03.303 | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:03.303 | 99.99th=[ 1270] 00:19:03.303 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:03.303 slat (nsec): min=9714, max=52032, avg=32649.60, stdev=5209.95 00:19:03.303 clat (usec): min=495, max=1089, avg=887.34, stdev=87.29 00:19:03.303 lat (usec): min=504, max=1121, avg=919.99, stdev=88.38 00:19:03.303 clat percentiles (usec): 00:19:03.303 | 1.00th=[ 619], 5.00th=[ 742], 10.00th=[ 775], 20.00th=[ 816], 00:19:03.303 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[ 906], 60.00th=[ 922], 00:19:03.303 | 70.00th=[ 938], 80.00th=[ 955], 90.00th=[ 971], 95.00th=[ 988], 00:19:03.303 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1090], 99.95th=[ 1090], 00:19:03.303 | 99.99th=[ 1090] 00:19:03.303 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.303 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.303 lat (usec) : 500=0.10%, 750=3.74%, 1000=48.23% 00:19:03.303 lat (msec) : 2=47.92% 00:19:03.303 cpu : usr=1.40%, sys=3.10%, ctx=963, majf=0, minf=1 00:19:03.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.303 issued rwts: total=450,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.303 job2: (groupid=0, jobs=1): err= 0: pid=410414: Sat Jun 8 00:44:21 2024 00:19:03.303 read: IOPS=14, BW=58.3KiB/s (59.7kB/s)(60.0KiB/1030msec) 00:19:03.303 slat (nsec): min=24546, max=25375, avg=24863.67, stdev=217.40 00:19:03.303 clat (usec): min=1427, max=42107, avg=39245.72, stdev=10462.59 00:19:03.303 lat (usec): min=1452, max=42132, avg=39270.58, stdev=10462.56 00:19:03.303 clat percentiles (usec): 00:19:03.303 | 1.00th=[ 1434], 5.00th=[ 1434], 10.00th=[41681], 20.00th=[41681], 00:19:03.303 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:03.304 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:03.304 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:03.304 | 99.99th=[42206] 00:19:03.304 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:03.304 slat (nsec): min=9342, max=65926, avg=29780.39, stdev=6858.35 00:19:03.304 clat (usec): min=462, max=1205, avg=822.86, stdev=132.85 00:19:03.304 lat (usec): min=472, max=1235, avg=852.64, stdev=134.43 00:19:03.304 clat percentiles (usec): 00:19:03.304 | 1.00th=[ 506], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 717], 00:19:03.304 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 857], 00:19:03.304 | 70.00th=[ 898], 80.00th=[ 938], 90.00th=[ 988], 95.00th=[ 1037], 00:19:03.304 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:19:03.304 | 99.99th=[ 1205] 00:19:03.304 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.304 lat (usec) : 500=0.57%, 750=28.08%, 1000=59.39% 00:19:03.304 lat (msec) : 2=9.30%, 50=2.66% 00:19:03.304 cpu : usr=0.68%, sys=1.55%, ctx=527, majf=0, minf=1 00:19:03.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.304 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.304 job3: (groupid=0, jobs=1): err= 0: pid=410415: Sat Jun 8 00:44:21 2024 00:19:03.304 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:19:03.304 slat (nsec): min=10688, max=25841, avg=24527.83, stdev=3467.07 00:19:03.304 clat (usec): min=40903, max=42036, avg=41520.68, stdev=511.91 00:19:03.304 lat (usec): min=40928, max=42062, avg=41545.21, stdev=511.60 00:19:03.304 clat percentiles (usec): 00:19:03.304 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:03.304 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:03.304 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:03.304 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:03.304 | 99.99th=[42206] 00:19:03.304 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:19:03.304 slat (nsec): min=9397, max=51491, avg=29854.36, stdev=7133.34 00:19:03.304 clat (usec): min=225, max=922, avg=536.16, stdev=115.93 
00:19:03.304 lat (usec): min=237, max=953, avg=566.02, stdev=117.37 00:19:03.304 clat percentiles (usec): 00:19:03.304 | 1.00th=[ 334], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 445], 00:19:03.304 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 529], 00:19:03.304 | 70.00th=[ 611], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:19:03.304 | 99.00th=[ 824], 99.50th=[ 881], 99.90th=[ 922], 99.95th=[ 922], 00:19:03.304 | 99.99th=[ 922] 00:19:03.304 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.304 lat (usec) : 250=0.19%, 500=53.02%, 750=40.75%, 1000=2.64% 00:19:03.304 lat (msec) : 50=3.40% 00:19:03.304 cpu : usr=0.77%, sys=1.44%, ctx=530, majf=0, minf=1 00:19:03.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.304 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.304 00:19:03.304 Run status group 0 (all jobs): 00:19:03.304 READ: bw=1919KiB/s (1965kB/s), 58.3KiB/s-1798KiB/s (59.7kB/s-1841kB/s), io=2000KiB (2048kB), run=1001-1042msec 00:19:03.304 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2046KiB/s (2013kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1042msec 00:19:03.304 00:19:03.304 Disk stats (read/write): 00:19:03.304 nvme0n1: ios=60/512, merge=0/0, ticks=568/306, in_queue=874, util=89.18% 00:19:03.304 nvme0n2: ios=313/512, merge=0/0, ticks=637/406, in_queue=1043, util=94.32% 00:19:03.304 nvme0n3: ios=65/512, merge=0/0, ticks=411/396, in_queue=807, util=90.57% 00:19:03.304 nvme0n4: ios=17/512, merge=0/0, ticks=706/267, in_queue=973, util=91.40% 00:19:03.304 00:44:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:03.304 [global] 00:19:03.304 thread=1 00:19:03.304 invalidate=1 00:19:03.304 rw=write 00:19:03.304 time_based=1 00:19:03.304 runtime=1 00:19:03.304 ioengine=libaio 00:19:03.304 direct=1 00:19:03.304 bs=4096 00:19:03.304 iodepth=128 00:19:03.304 norandommap=0 00:19:03.304 numjobs=1 00:19:03.304 00:19:03.304 verify_dump=1 00:19:03.304 verify_backlog=512 00:19:03.304 verify_state_save=0 00:19:03.304 do_verify=1 00:19:03.304 verify=crc32c-intel 00:19:03.304 [job0] 00:19:03.304 filename=/dev/nvme0n1 00:19:03.304 [job1] 00:19:03.304 filename=/dev/nvme0n2 00:19:03.304 [job2] 00:19:03.304 filename=/dev/nvme0n3 00:19:03.304 [job3] 00:19:03.304 filename=/dev/nvme0n4 00:19:03.304 Could not set queue depth (nvme0n1) 00:19:03.304 Could not set queue depth (nvme0n2) 00:19:03.304 Could not set queue depth (nvme0n3) 00:19:03.304 Could not set queue depth (nvme0n4) 00:19:03.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.571 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.571 fio-3.35 00:19:03.571 Starting 4 threads 00:19:04.978 00:19:04.978 job0: (groupid=0, jobs=1): err= 0: pid=410934: 
Sat Jun 8 00:44:22 2024 00:19:04.978 read: IOPS=4257, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1002msec) 00:19:04.978 slat (nsec): min=924, max=13285k, avg=100753.50, stdev=700310.07 00:19:04.978 clat (usec): min=715, max=36455, avg=12633.01, stdev=4775.47 00:19:04.978 lat (usec): min=4377, max=36465, avg=12733.76, stdev=4823.79 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 5342], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9765], 00:19:04.978 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:19:04.978 | 70.00th=[13304], 80.00th=[14353], 90.00th=[17433], 95.00th=[22676], 00:19:04.978 | 99.00th=[32900], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:19:04.978 | 99.99th=[36439] 00:19:04.978 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:19:04.978 slat (nsec): min=1514, max=9517.6k, avg=110744.99, stdev=563217.51 00:19:04.978 clat (usec): min=1274, max=44056, avg=15881.85, stdev=9371.29 00:19:04.978 lat (usec): min=1286, max=44063, avg=15992.59, stdev=9433.38 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 3064], 5.00th=[ 5669], 10.00th=[ 7308], 20.00th=[ 8094], 00:19:04.978 | 30.00th=[ 9241], 40.00th=[11076], 50.00th=[12256], 60.00th=[14353], 00:19:04.978 | 70.00th=[19530], 80.00th=[25297], 90.00th=[30802], 95.00th=[35390], 00:19:04.978 | 99.00th=[39060], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:19:04.978 | 99.99th=[44303] 00:19:04.978 bw ( KiB/s): min=16944, max=16944, per=20.62%, avg=16944.00, stdev= 0.00, samples=1 00:19:04.978 iops : min= 4236, max= 4236, avg=4236.00, stdev= 0.00, samples=1 00:19:04.978 lat (usec) : 750=0.01% 00:19:04.978 lat (msec) : 2=0.11%, 4=0.78%, 10=27.33%, 20=53.68%, 50=18.09% 00:19:04.978 cpu : usr=3.40%, sys=3.70%, ctx=450, majf=0, minf=1 00:19:04.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:04.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.978 issued rwts: total=4266,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.978 job1: (groupid=0, jobs=1): err= 0: pid=410935: Sat Jun 8 00:44:22 2024 00:19:04.978 read: IOPS=5981, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec) 00:19:04.978 slat (nsec): min=862, max=19428k, avg=75923.61, stdev=577622.21 00:19:04.978 clat (usec): min=1504, max=47044, avg=9973.56, stdev=5643.61 00:19:04.978 lat (usec): min=2954, max=47380, avg=10049.49, stdev=5677.70 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 6194], 00:19:04.978 | 30.00th=[ 6783], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[ 9241], 00:19:04.978 | 70.00th=[10028], 80.00th=[12780], 90.00th=[17171], 95.00th=[21627], 00:19:04.978 | 99.00th=[32113], 99.50th=[32375], 99.90th=[46924], 99.95th=[46924], 00:19:04.978 | 99.99th=[46924] 00:19:04.978 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:19:04.978 slat (nsec): min=1516, max=19973k, avg=79737.93, stdev=627530.07 00:19:04.978 clat (usec): min=1272, max=68924, avg=10957.23, stdev=7611.19 00:19:04.978 lat (usec): min=1281, max=68927, avg=11036.97, stdev=7651.77 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 3163], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 6259], 00:19:04.978 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8455], 60.00th=[ 9634], 00:19:04.978 | 70.00th=[10945], 80.00th=[13042], 
90.00th=[22676], 95.00th=[28181], 00:19:04.978 | 99.00th=[38536], 99.50th=[41681], 99.90th=[46400], 99.95th=[68682], 00:19:04.978 | 99.99th=[68682] 00:19:04.978 bw ( KiB/s): min=22664, max=26488, per=29.91%, avg=24576.00, stdev=2703.98, samples=2 00:19:04.978 iops : min= 5666, max= 6622, avg=6144.00, stdev=675.99, samples=2 00:19:04.978 lat (msec) : 2=0.08%, 4=4.11%, 10=62.15%, 20=24.10%, 50=9.54% 00:19:04.978 lat (msec) : 100=0.03% 00:19:04.978 cpu : usr=3.98%, sys=4.78%, ctx=620, majf=0, minf=1 00:19:04.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:04.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.978 issued rwts: total=6011,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.978 job2: (groupid=0, jobs=1): err= 0: pid=410936: Sat Jun 8 00:44:22 2024 00:19:04.978 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:19:04.978 slat (nsec): min=872, max=23445k, avg=130781.31, stdev=950924.96 00:19:04.978 clat (usec): min=3944, max=76204, avg=16763.01, stdev=14618.17 00:19:04.978 lat (usec): min=3951, max=76279, avg=16893.79, stdev=14697.20 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 4621], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[ 8225], 00:19:04.978 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10945], 00:19:04.978 | 70.00th=[15270], 80.00th=[27657], 90.00th=[35914], 95.00th=[49021], 00:19:04.978 | 99.00th=[72877], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:19:04.978 | 99.99th=[76022] 00:19:04.978 write: IOPS=4235, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1005msec); 0 zone resets 00:19:04.978 slat (nsec): min=1535, max=23793k, avg=101331.88, stdev=827951.29 00:19:04.978 clat (usec): min=723, max=77248, avg=13810.34, stdev=11811.71 00:19:04.978 lat (usec): min=1446, max=77258, avg=13911.67, stdev=11883.91 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 2802], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 7242], 00:19:04.978 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:19:04.978 | 70.00th=[12256], 80.00th=[16319], 90.00th=[28443], 95.00th=[41681], 00:19:04.978 | 99.00th=[60556], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:19:04.978 | 99.99th=[77071] 00:19:04.978 bw ( KiB/s): min=15416, max=17616, per=20.10%, avg=16516.00, stdev=1555.63, samples=2 00:19:04.978 iops : min= 3854, max= 4404, avg=4129.00, stdev=388.91, samples=2 00:19:04.978 lat (usec) : 750=0.01% 00:19:04.978 lat (msec) : 2=0.12%, 4=1.63%, 10=49.68%, 20=29.19%, 50=15.28% 00:19:04.978 lat (msec) : 100=4.09% 00:19:04.978 cpu : usr=2.59%, sys=4.48%, ctx=313, majf=0, minf=2 00:19:04.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:04.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.978 issued rwts: total=4096,4257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.978 job3: (groupid=0, jobs=1): err= 0: pid=410937: Sat Jun 8 00:44:22 2024 00:19:04.978 read: IOPS=5216, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec) 00:19:04.978 slat (nsec): min=876, max=29135k, avg=80148.20, stdev=745153.15 00:19:04.978 clat (usec): min=2898, max=74327, avg=11508.22, stdev=9845.61 00:19:04.978 lat (usec): min=2904, 
max=74334, avg=11588.37, stdev=9894.45 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 3752], 5.00th=[ 4359], 10.00th=[ 5604], 20.00th=[ 6849], 00:19:04.978 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 9241], 60.00th=[ 9896], 00:19:04.978 | 70.00th=[10421], 80.00th=[12125], 90.00th=[18744], 95.00th=[25822], 00:19:04.978 | 99.00th=[58459], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:19:04.978 | 99.99th=[73925] 00:19:04.978 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:19:04.978 slat (nsec): min=1540, max=19100k, avg=88294.06, stdev=677279.68 00:19:04.978 clat (usec): min=1066, max=55367, avg=11925.13, stdev=9019.32 00:19:04.978 lat (usec): min=1074, max=55371, avg=12013.42, stdev=9083.54 00:19:04.978 clat percentiles (usec): 00:19:04.978 | 1.00th=[ 2802], 5.00th=[ 4752], 10.00th=[ 5735], 20.00th=[ 6587], 00:19:04.978 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10421], 00:19:04.978 | 70.00th=[12387], 80.00th=[13698], 90.00th=[17957], 95.00th=[35914], 00:19:04.978 | 99.00th=[53216], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:19:04.978 | 99.99th=[55313] 00:19:04.978 bw ( KiB/s): min=20928, max=24048, per=27.37%, avg=22488.00, stdev=2206.17, samples=2 00:19:04.978 iops : min= 5232, max= 6012, avg=5622.00, stdev=551.54, samples=2 00:19:04.978 lat (msec) : 2=0.16%, 4=2.59%, 10=57.27%, 20=31.57%, 50=6.84% 00:19:04.978 lat (msec) : 100=1.57% 00:19:04.978 cpu : usr=3.69%, sys=4.99%, ctx=568, majf=0, minf=1 00:19:04.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:04.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.978 issued rwts: total=5237,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.978 00:19:04.978 Run status group 0 (all jobs): 00:19:04.978 READ: bw=76.2MiB/s (79.9MB/s), 15.9MiB/s-23.4MiB/s (16.7MB/s-24.5MB/s), io=76.6MiB (80.3MB), run=1002-1005msec 00:19:04.978 WRITE: bw=80.2MiB/s (84.1MB/s), 16.5MiB/s-23.9MiB/s (17.3MB/s-25.0MB/s), io=80.6MiB (84.5MB), run=1002-1005msec 00:19:04.978 00:19:04.978 Disk stats (read/write): 00:19:04.978 nvme0n1: ios=3634/3703, merge=0/0, ticks=44346/57610, in_queue=101956, util=87.98% 00:19:04.979 nvme0n2: ios=5096/5120, merge=0/0, ticks=28947/34399, in_queue=63346, util=91.74% 00:19:04.979 nvme0n3: ios=3035/3072, merge=0/0, ticks=22142/20477, in_queue=42619, util=94.84% 00:19:04.979 nvme0n4: ios=4096/4608, merge=0/0, ticks=26187/31123, in_queue=57310, util=87.63% 00:19:04.979 00:44:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:04.979 [global] 00:19:04.979 thread=1 00:19:04.979 invalidate=1 00:19:04.979 rw=randwrite 00:19:04.979 time_based=1 00:19:04.979 runtime=1 00:19:04.979 ioengine=libaio 00:19:04.979 direct=1 00:19:04.979 bs=4096 00:19:04.979 iodepth=128 00:19:04.979 norandommap=0 00:19:04.979 numjobs=1 00:19:04.979 00:19:04.979 verify_dump=1 00:19:04.979 verify_backlog=512 00:19:04.979 verify_state_save=0 00:19:04.979 do_verify=1 00:19:04.979 verify=crc32c-intel 00:19:04.979 [job0] 00:19:04.979 filename=/dev/nvme0n1 00:19:04.979 [job1] 00:19:04.979 filename=/dev/nvme0n2 00:19:04.979 [job2] 00:19:04.979 filename=/dev/nvme0n3 00:19:04.979 [job3] 00:19:04.979 filename=/dev/nvme0n4 00:19:04.979 Could not set queue depth 
(nvme0n1) 00:19:04.979 Could not set queue depth (nvme0n2) 00:19:04.979 Could not set queue depth (nvme0n3) 00:19:04.979 Could not set queue depth (nvme0n4) 00:19:05.248 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.248 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.248 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.248 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.248 fio-3.35 00:19:05.248 Starting 4 threads 00:19:06.657 00:19:06.657 job0: (groupid=0, jobs=1): err= 0: pid=411458: Sat Jun 8 00:44:24 2024 00:19:06.657 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:19:06.657 slat (nsec): min=1376, max=12275k, avg=147788.83, stdev=993224.97 00:19:06.657 clat (usec): min=6010, max=75412, avg=17097.63, stdev=11173.06 00:19:06.657 lat (usec): min=6015, max=75419, avg=17245.42, stdev=11275.20 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 7898], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11469], 00:19:06.657 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13829], 60.00th=[14091], 00:19:06.657 | 70.00th=[15270], 80.00th=[18220], 90.00th=[25035], 95.00th=[46924], 00:19:06.657 | 99.00th=[66847], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:19:06.657 | 99.99th=[74974] 00:19:06.657 write: IOPS=2986, BW=11.7MiB/s (12.2MB/s)(11.8MiB/1012msec); 0 zone resets 00:19:06.657 slat (nsec): min=1648, max=11122k, avg=198618.55, stdev=991091.26 00:19:06.657 clat (msec): min=3, max=101, avg=27.89, stdev=25.16 00:19:06.657 lat (msec): min=3, max=101, avg=28.09, stdev=25.32 00:19:06.657 clat percentiles (msec): 00:19:06.657 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 9], 00:19:06.657 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 22], 00:19:06.657 | 70.00th=[ 40], 80.00th=[ 54], 90.00th=[ 69], 95.00th=[ 81], 00:19:06.657 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 102], 99.95th=[ 102], 00:19:06.657 | 99.99th=[ 102] 00:19:06.657 bw ( KiB/s): min=11440, max=11720, per=14.99%, avg=11580.00, stdev=197.99, samples=2 00:19:06.657 iops : min= 2860, max= 2930, avg=2895.00, stdev=49.50, samples=2 00:19:06.657 lat (msec) : 4=0.73%, 10=20.21%, 20=50.30%, 50=14.39%, 100=14.26% 00:19:06.657 lat (msec) : 250=0.11% 00:19:06.657 cpu : usr=2.27%, sys=3.66%, ctx=277, majf=0, minf=1 00:19:06.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:06.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.657 issued rwts: total=2560,3022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.657 job1: (groupid=0, jobs=1): err= 0: pid=411459: Sat Jun 8 00:44:24 2024 00:19:06.657 read: IOPS=8787, BW=34.3MiB/s (36.0MB/s)(34.5MiB/1005msec) 00:19:06.657 slat (nsec): min=942, max=19716k, avg=53610.06, stdev=465764.94 00:19:06.657 clat (usec): min=2091, max=63628, avg=7117.54, stdev=5696.52 00:19:06.657 lat (usec): min=2842, max=63655, avg=7171.15, stdev=5736.20 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 3720], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5342], 00:19:06.657 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6259], 00:19:06.657 | 70.00th=[ 6718], 80.00th=[ 7308], 
90.00th=[ 8717], 95.00th=[ 9634], 00:19:06.657 | 99.00th=[41681], 99.50th=[50594], 99.90th=[58983], 99.95th=[58983], 00:19:06.657 | 99.99th=[63701] 00:19:06.657 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:19:06.657 slat (nsec): min=1593, max=10927k, avg=52380.76, stdev=347173.68 00:19:06.657 clat (usec): min=1209, max=82365, avg=6987.46, stdev=8323.43 00:19:06.657 lat (usec): min=1219, max=82393, avg=7039.84, stdev=8372.22 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 2278], 5.00th=[ 2900], 10.00th=[ 3589], 20.00th=[ 4490], 00:19:06.657 | 30.00th=[ 5080], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5669], 00:19:06.657 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 8455], 95.00th=[11600], 00:19:06.657 | 99.00th=[55313], 99.50th=[69731], 99.90th=[81265], 99.95th=[82314], 00:19:06.657 | 99.99th=[82314] 00:19:06.657 bw ( KiB/s): min=28672, max=45048, per=47.72%, avg=36860.00, stdev=11579.58, samples=2 00:19:06.657 iops : min= 7168, max=11262, avg=9215.00, stdev=2894.90, samples=2 00:19:06.657 lat (msec) : 2=0.28%, 4=7.75%, 10=86.70%, 20=2.22%, 50=2.13% 00:19:06.657 lat (msec) : 100=0.93% 00:19:06.657 cpu : usr=4.68%, sys=6.08%, ctx=835, majf=0, minf=1 00:19:06.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:06.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.657 issued rwts: total=8831,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.657 job2: (groupid=0, jobs=1): err= 0: pid=411460: Sat Jun 8 00:44:24 2024 00:19:06.657 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:19:06.657 slat (nsec): min=913, max=16853k, avg=147476.47, stdev=1114204.19 00:19:06.657 clat (usec): min=4441, max=74411, avg=19603.05, stdev=11219.18 00:19:06.657 lat (usec): min=4754, max=80324, avg=19750.53, stdev=11332.78 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 5407], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 8586], 00:19:06.657 | 30.00th=[10421], 40.00th=[15270], 50.00th=[16581], 60.00th=[18744], 00:19:06.657 | 70.00th=[26346], 80.00th=[30540], 90.00th=[33817], 95.00th=[38536], 00:19:06.657 | 99.00th=[49021], 99.50th=[58983], 99.90th=[73925], 99.95th=[73925], 00:19:06.657 | 99.99th=[73925] 00:19:06.657 write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1009msec); 0 zone resets 00:19:06.657 slat (nsec): min=1542, max=18981k, avg=120280.40, stdev=937121.88 00:19:06.657 clat (usec): min=1215, max=74145, avg=15518.00, stdev=11458.99 00:19:06.657 lat (usec): min=1226, max=74155, avg=15638.28, stdev=11551.45 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 2933], 5.00th=[ 4359], 10.00th=[ 6521], 20.00th=[ 7570], 00:19:06.657 | 30.00th=[ 8291], 40.00th=[11600], 50.00th=[12780], 60.00th=[13173], 00:19:06.657 | 70.00th=[15008], 80.00th=[21627], 90.00th=[29754], 95.00th=[33424], 00:19:06.657 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:19:06.657 | 99.99th=[73925] 00:19:06.657 bw ( KiB/s): min= 8312, max=20480, per=18.64%, avg=14396.00, stdev=8604.08, samples=2 00:19:06.657 iops : min= 2078, max= 5120, avg=3599.00, stdev=2151.02, samples=2 00:19:06.657 lat (msec) : 2=0.26%, 4=1.26%, 10=30.73%, 20=36.03%, 50=30.21% 00:19:06.657 lat (msec) : 100=1.52% 00:19:06.657 cpu : usr=3.87%, sys=2.78%, ctx=324, majf=0, minf=1 00:19:06.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.1% 00:19:06.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.657 issued rwts: total=3584,3719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.657 job3: (groupid=0, jobs=1): err= 0: pid=411461: Sat Jun 8 00:44:24 2024 00:19:06.657 read: IOPS=3328, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1005msec) 00:19:06.657 slat (nsec): min=969, max=19781k, avg=123805.54, stdev=930130.65 00:19:06.657 clat (usec): min=1392, max=55823, avg=14378.35, stdev=8093.33 00:19:06.657 lat (usec): min=1413, max=55832, avg=14502.16, stdev=8189.87 00:19:06.657 clat percentiles (usec): 00:19:06.657 | 1.00th=[ 3425], 5.00th=[ 5866], 10.00th=[ 6849], 20.00th=[ 7701], 00:19:06.657 | 30.00th=[ 9896], 40.00th=[12256], 50.00th=[13829], 60.00th=[14222], 00:19:06.657 | 70.00th=[14746], 80.00th=[18482], 90.00th=[23200], 95.00th=[29230], 00:19:06.657 | 99.00th=[48497], 99.50th=[50594], 99.90th=[55837], 99.95th=[55837], 00:19:06.657 | 99.99th=[55837] 00:19:06.657 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:19:06.657 slat (nsec): min=1575, max=10761k, avg=148666.20, stdev=881802.96 00:19:06.657 clat (usec): min=1186, max=106271, avg=22092.98, stdev=26178.09 00:19:06.657 lat (usec): min=1200, max=106278, avg=22241.64, stdev=26363.16 00:19:06.657 clat percentiles (msec): 00:19:06.657 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 5], 20.00th=[ 6], 00:19:06.657 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:19:06.657 | 70.00th=[ 15], 80.00th=[ 45], 90.00th=[ 64], 95.00th=[ 91], 00:19:06.657 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:19:06.657 | 99.99th=[ 107] 00:19:06.657 bw ( KiB/s): min= 9056, max=19616, per=18.56%, avg=14336.00, stdev=7467.05, samples=2 00:19:06.657 iops : min= 2264, max= 4904, avg=3584.00, stdev=1866.76, samples=2 00:19:06.657 lat (msec) : 2=0.56%, 4=4.89%, 10=37.87%, 20=33.70%, 50=14.69% 00:19:06.657 lat (msec) : 100=7.37%, 250=0.91% 00:19:06.657 cpu : usr=2.19%, sys=4.48%, ctx=312, majf=0, minf=1 00:19:06.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:06.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.657 issued rwts: total=3345,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.657 00:19:06.657 Run status group 0 (all jobs): 00:19:06.657 READ: bw=70.7MiB/s (74.1MB/s), 9.88MiB/s-34.3MiB/s (10.4MB/s-36.0MB/s), io=71.6MiB (75.0MB), run=1005-1012msec 00:19:06.657 WRITE: bw=75.4MiB/s (79.1MB/s), 11.7MiB/s-35.8MiB/s (12.2MB/s-37.6MB/s), io=76.3MiB (80.0MB), run=1005-1012msec 00:19:06.657 00:19:06.657 Disk stats (read/write): 00:19:06.657 nvme0n1: ios=2601/2599, merge=0/0, ticks=42763/59375, in_queue=102138, util=99.40% 00:19:06.657 nvme0n2: ios=7209/7228, merge=0/0, ticks=46947/50241, in_queue=97188, util=99.29% 00:19:06.658 nvme0n3: ios=3072/3514, merge=0/0, ticks=29220/24628, in_queue=53848, util=88.21% 00:19:06.658 nvme0n4: ios=2598/2935, merge=0/0, ticks=33900/61805, in_queue=95705, util=96.06% 00:19:06.658 00:44:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:06.658 00:44:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=411557 00:19:06.658 00:44:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
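What follows is the hotplug pass: the fio read job captured as fio_pid=411557 keeps running in the background (hence the sync and sleep 3 above) while rpc.py deletes the raid and malloc bdevs backing the four namespaces, so every job is expected to die with err=121 (Remote I/O error) and the script later reports "nvmf hotplug test: fio failed as expected". A minimal standalone sketch of the same pattern, assuming an SPDK target is already up and serving /dev/nvme0n1, with the bdev names taken from this run:

    # Sketch only: race a background fio reader against bdev hot-removal.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    fio --name=hotplug --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=4096 --iodepth=1 --time_based --runtime=10 &
    fio_pid=$!
    sleep 3                                # let reads get in flight first

    # Pull the backing bdevs out from under the namespaces mid-I/O;
    # outstanding reads then complete with err=121 (Remote I/O error).
    $RPC bdev_raid_delete concat0
    $RPC bdev_raid_delete raid0
    $RPC bdev_malloc_delete Malloc0

    wait $fio_pid || echo 'fio failed as expected'

The wrapper invocation below drives all four namespaces the same way; the test then asserts that fio_status came back non-zero (4 here), since a clean fio exit would mean the hot-remove went unnoticed.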
00:19:06.658 00:44:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:06.658 [global] 00:19:06.658 thread=1 00:19:06.658 invalidate=1 00:19:06.658 rw=read 00:19:06.658 time_based=1 00:19:06.658 runtime=10 00:19:06.658 ioengine=libaio 00:19:06.658 direct=1 00:19:06.658 bs=4096 00:19:06.658 iodepth=1 00:19:06.658 norandommap=1 00:19:06.658 numjobs=1 00:19:06.658 00:19:06.658 [job0] 00:19:06.658 filename=/dev/nvme0n1 00:19:06.658 [job1] 00:19:06.658 filename=/dev/nvme0n2 00:19:06.658 [job2] 00:19:06.658 filename=/dev/nvme0n3 00:19:06.658 [job3] 00:19:06.658 filename=/dev/nvme0n4 00:19:06.658 Could not set queue depth (nvme0n1) 00:19:06.658 Could not set queue depth (nvme0n2) 00:19:06.658 Could not set queue depth (nvme0n3) 00:19:06.658 Could not set queue depth (nvme0n4) 00:19:06.924 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.924 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.924 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.924 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.924 fio-3.35 00:19:06.924 Starting 4 threads 00:19:09.464 00:44:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:09.724 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3678208, buflen=4096 00:19:09.724 fio: pid=411955, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:09.724 00:44:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:09.724 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10264576, buflen=4096 00:19:09.724 fio: pid=411948, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:09.724 00:44:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.724 00:44:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:09.984 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=282624, buflen=4096 00:19:09.984 fio: pid=411913, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:09.984 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.984 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:10.245 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.245 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:10.245 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=348160, buflen=4096 00:19:10.245 fio: pid=411930, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:10.245 00:19:10.245 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=411913: Sat Jun 8 00:44:28 2024 00:19:10.245 read: IOPS=23, BW=94.1KiB/s (96.4kB/s)(276KiB/2933msec) 00:19:10.245 slat (usec): min=23, max=13590, avg=221.29, stdev=1621.25 00:19:10.245 clat (usec): min=41394, max=42196, avg=41960.92, stdev=95.50 00:19:10.246 lat (usec): min=41863, max=54984, avg=42185.06, stdev=1564.65 00:19:10.246 clat percentiles (usec): 00:19:10.246 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:10.246 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:10.246 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:10.246 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.246 | 99.99th=[42206] 00:19:10.246 bw ( KiB/s): min= 88, max= 96, per=2.06%, avg=94.40, stdev= 3.58, samples=5 00:19:10.246 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:19:10.246 lat (msec) : 50=98.57% 00:19:10.246 cpu : usr=0.00%, sys=0.10%, ctx=72, majf=0, minf=1 00:19:10.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.246 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=411930: Sat Jun 8 00:44:28 2024 00:19:10.246 read: IOPS=27, BW=109KiB/s (112kB/s)(340KiB/3115msec) 00:19:10.246 slat (usec): min=6, max=15655, avg=246.90, stdev=1719.92 00:19:10.246 clat (usec): min=609, max=42624, avg=36140.15, stdev=14370.95 00:19:10.246 lat (usec): min=627, max=57052, avg=36349.99, stdev=14539.59 00:19:10.246 clat percentiles (usec): 00:19:10.246 | 1.00th=[ 611], 5.00th=[ 775], 10.00th=[ 1188], 20.00th=[41681], 00:19:10.246 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:10.246 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:10.246 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:10.246 | 99.99th=[42730] 00:19:10.246 bw ( KiB/s): min= 88, max= 184, per=2.39%, avg=109.33, stdev=36.72, samples=6 00:19:10.246 iops : min= 22, max= 46, avg=27.33, stdev= 9.18, samples=6 00:19:10.246 lat (usec) : 750=4.65%, 1000=4.65% 00:19:10.246 lat (msec) : 2=4.65%, 50=84.88% 00:19:10.246 cpu : usr=0.16%, sys=0.00%, ctx=90, majf=0, minf=1 00:19:10.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.246 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=411948: Sat Jun 8 00:44:28 2024 00:19:10.246 read: IOPS=908, BW=3632KiB/s (3719kB/s)(9.79MiB/2760msec) 00:19:10.246 slat (usec): min=6, max=19674, avg=32.87, stdev=392.49 00:19:10.246 clat (usec): min=463, max=43549, avg=1053.71, stdev=1454.99 00:19:10.246 lat (usec): min=490, max=61992, avg=1086.58, stdev=1708.66 00:19:10.246 clat percentiles (usec): 00:19:10.246 | 1.00th=[ 570], 5.00th=[ 635], 10.00th=[ 693], 20.00th=[ 766], 00:19:10.246 | 30.00th=[ 832], 40.00th=[ 1057], 
50.00th=[ 1106], 60.00th=[ 1139], 00:19:10.246 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1254], 00:19:10.246 | 99.00th=[ 1319], 99.50th=[ 1369], 99.90th=[42206], 99.95th=[42206], 00:19:10.246 | 99.99th=[43779] 00:19:10.246 bw ( KiB/s): min= 3392, max= 4752, per=85.19%, avg=3892.80, stdev=664.09, samples=5 00:19:10.246 iops : min= 848, max= 1188, avg=973.20, stdev=166.02, samples=5 00:19:10.246 lat (usec) : 500=0.28%, 750=17.87%, 1000=19.07% 00:19:10.246 lat (msec) : 2=62.62%, 50=0.12% 00:19:10.246 cpu : usr=1.63%, sys=3.33%, ctx=2509, majf=0, minf=1 00:19:10.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.246 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=411955: Sat Jun 8 00:44:28 2024 00:19:10.246 read: IOPS=348, BW=1391KiB/s (1424kB/s)(3592KiB/2583msec) 00:19:10.246 slat (nsec): min=6141, max=59826, avg=24988.16, stdev=5783.90 00:19:10.246 clat (usec): min=461, max=42778, avg=2816.84, stdev=8333.17 00:19:10.246 lat (usec): min=487, max=42804, avg=2841.83, stdev=8333.27 00:19:10.246 clat percentiles (usec): 00:19:10.246 | 1.00th=[ 529], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[ 766], 00:19:10.246 | 30.00th=[ 840], 40.00th=[ 1106], 50.00th=[ 1172], 60.00th=[ 1205], 00:19:10.246 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1319], 95.00th=[ 1401], 00:19:10.246 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:10.246 | 99.99th=[42730] 00:19:10.246 bw ( KiB/s): min= 104, max= 3928, per=28.96%, avg=1323.20, stdev=1752.17, samples=5 00:19:10.246 iops : min= 26, max= 982, avg=330.80, stdev=438.04, samples=5 00:19:10.246 lat (usec) : 500=0.56%, 750=18.13%, 1000=14.79% 00:19:10.246 lat (msec) : 2=62.07%, 50=4.34% 00:19:10.246 cpu : usr=0.46%, sys=1.39%, ctx=899, majf=0, minf=2 00:19:10.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.246 issued rwts: total=899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.246 00:19:10.246 Run status group 0 (all jobs): 00:19:10.246 READ: bw=4569KiB/s (4679kB/s), 94.1KiB/s-3632KiB/s (96.4kB/s-3719kB/s), io=13.9MiB (14.6MB), run=2583-3115msec 00:19:10.246 00:19:10.246 Disk stats (read/write): 00:19:10.246 nvme0n1: ios=67/0, merge=0/0, ticks=2813/0, in_queue=2813, util=94.39% 00:19:10.246 nvme0n2: ios=84/0, merge=0/0, ticks=3032/0, in_queue=3032, util=95.23% 00:19:10.246 nvme0n3: ios=2501/0, merge=0/0, ticks=2275/0, in_queue=2275, util=96.03% 00:19:10.246 nvme0n4: ios=605/0, merge=0/0, ticks=2254/0, in_queue=2254, util=96.02% 00:19:10.246 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.246 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:10.507 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.507 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:10.768 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.768 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:10.768 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.768 00:44:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 411557 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:11.029 nvmf hotplug test: fio failed as expected 00:19:11.029 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.290 rmmod nvme_tcp 00:19:11.290 
rmmod nvme_fabrics 00:19:11.290 rmmod nvme_keyring 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 408115 ']' 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 408115 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 408115 ']' 00:19:11.290 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 408115 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 408115 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 408115' 00:19:11.291 killing process with pid 408115 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 408115 00:19:11.291 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 408115 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.550 00:44:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.096 00:44:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.096 00:19:14.096 real 0m28.148s 00:19:14.096 user 2m40.862s 00:19:14.096 sys 0m8.834s 00:19:14.096 00:44:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:14.096 00:44:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.096 ************************************ 00:19:14.096 END TEST nvmf_fio_target 00:19:14.096 ************************************ 00:19:14.096 00:44:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:14.096 00:44:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:14.096 00:44:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:14.096 00:44:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.096 ************************************ 00:19:14.096 START TEST nvmf_bdevio 00:19:14.096 ************************************ 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:14.096 * Looking for test storage... 00:19:14.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.096 00:44:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- 
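The eval '_remove_spdk_ns 14> /dev/null' traced at this point is a file-descriptor trick for muting one helper's trace output without muting the helper itself: bash writes its xtrace stream to whatever BASH_XTRACEFD designates, so redirecting fd 14 to /dev/null for a single command discards only that command's trace lines. A minimal sketch of the idiom, assuming the harness routes xtrace through fd 14 (the helper below is an illustrative stand-in, not the real _remove_spdk_ns):

exec 14>&2                 # give fd 14 a target (stderr)
BASH_XTRACEFD=14           # xtrace now goes to fd 14 instead of fd 2
set -x
noisy_cleanup() { ip netns del demo_ns 2>/dev/null || true; }
noisy_cleanup                        # body is traced to stderr as usual
eval 'noisy_cleanup 14> /dev/null'   # same call, body trace discarded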
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:20.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:20.698 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:20.698 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:20.698 
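The device walk in this stretch is plain sysfs globbing: for each whitelisted PCI function the harness expands /sys/bus/pci/devices/<addr>/net/*, which the NIC driver populates with one directory per netdev, then strips everything but the interface name before echoing it. The same walk stands alone as:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue            # glob may match nothing
        echo "Found net devices under $pci: ${path##*/}"
    done
done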
Found net devices under 0000:4b:00.1: cvl_0_1 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:19:20.698 00:19:20.698 --- 10.0.0.2 ping statistics --- 00:19:20.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.698 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:19:20.698 00:19:20.698 --- 10.0.0.1 ping statistics --- 00:19:20.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.698 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.698 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=416834 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 416834 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 416834 ']' 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.699 00:44:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:20.699 [2024-06-08 00:44:38.903250] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:19:20.699 [2024-06-08 00:44:38.903312] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.699 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.960 [2024-06-08 00:44:38.991049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.960 [2024-06-08 00:44:39.075588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.960 [2024-06-08 00:44:39.075632] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:20.960 [2024-06-08 00:44:39.075640] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.960 [2024-06-08 00:44:39.075647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.960 [2024-06-08 00:44:39.075653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.960 [2024-06-08 00:44:39.075809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.960 [2024-06-08 00:44:39.075943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.960 [2024-06-08 00:44:39.076100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.960 [2024-06-08 00:44:39.076101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 [2024-06-08 00:44:39.740309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 Malloc0 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
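The rpc_cmd calls traced here are the entire target-side bring-up; against an already running nvmf_tgt the same objects can be created directly with scripts/rpc.py (which talks to /var/tmp/spdk.sock by default), sketched with the exact arguments from the trace:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags as in NVMF_TRANSPORT_OPTS
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420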
00:19:21.533 [2024-06-08 00:44:39.789777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.533 { 00:19:21.533 "params": { 00:19:21.533 "name": "Nvme$subsystem", 00:19:21.533 "trtype": "$TEST_TRANSPORT", 00:19:21.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.533 "adrfam": "ipv4", 00:19:21.533 "trsvcid": "$NVMF_PORT", 00:19:21.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.533 "hdgst": ${hdgst:-false}, 00:19:21.533 "ddgst": ${ddgst:-false} 00:19:21.533 }, 00:19:21.533 "method": "bdev_nvme_attach_controller" 00:19:21.533 } 00:19:21.533 EOF 00:19:21.533 )") 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:21.533 00:44:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.533 "params": { 00:19:21.533 "name": "Nvme1", 00:19:21.533 "trtype": "tcp", 00:19:21.533 "traddr": "10.0.0.2", 00:19:21.533 "adrfam": "ipv4", 00:19:21.533 "trsvcid": "4420", 00:19:21.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.533 "hdgst": false, 00:19:21.533 "ddgst": false 00:19:21.533 }, 00:19:21.533 "method": "bdev_nvme_attach_controller" 00:19:21.533 }' 00:19:21.795 [2024-06-08 00:44:39.845198] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:19:21.795 [2024-06-08 00:44:39.845261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417056 ] 00:19:21.795 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.795 [2024-06-08 00:44:39.908641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.795 [2024-06-08 00:44:39.983854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.795 [2024-06-08 00:44:39.983972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.795 [2024-06-08 00:44:39.983975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.055 I/O targets: 00:19:22.056 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.056 00:19:22.056 00:19:22.056 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.056 http://cunit.sourceforge.net/ 00:19:22.056 00:19:22.056 00:19:22.056 Suite: bdevio tests on: Nvme1n1 00:19:22.056 Test: blockdev write read block ...passed 00:19:22.056 Test: blockdev write zeroes read block ...passed 00:19:22.056 Test: blockdev write zeroes read no split ...passed 00:19:22.056 Test: blockdev write zeroes read split ...passed 00:19:22.056 Test: blockdev write zeroes read split partial ...passed 00:19:22.056 Test: blockdev reset ...[2024-06-08 00:44:40.300624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.056 [2024-06-08 00:44:40.300681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202a400 (9): Bad file descriptor 00:19:22.316 [2024-06-08 00:44:40.451695] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
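bdevio is handed its configuration without any file on disk: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry traced above on stdout, and --json /dev/fd/62 is simply what bash's process substitution expanded to. A reduced sketch of the pattern; the outer subsystems/bdev envelope is inferred from the helper's jq and IFS steps and is not shown verbatim in the trace:

gen_json() {
jq . <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
}
./test/bdev/bdevio/bdevio --json <(gen_json)   # <(...) is why the log records /dev/fd/62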
00:19:22.316 passed 00:19:22.316 Test: blockdev write read 8 blocks ...passed 00:19:22.316 Test: blockdev write read size > 128k ...passed 00:19:22.316 Test: blockdev write read invalid size ...passed 00:19:22.316 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.316 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.316 Test: blockdev write read max offset ...passed 00:19:22.316 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.316 Test: blockdev writev readv 8 blocks ...passed 00:19:22.316 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.577 Test: blockdev writev readv block ...passed 00:19:22.577 Test: blockdev writev readv size > 128k ...passed 00:19:22.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.577 Test: blockdev comparev and writev ...[2024-06-08 00:44:40.674076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.674111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.674489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.674507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.674884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.674900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.674905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.675301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.577 [2024-06-08 00:44:40.675308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.577 [2024-06-08 00:44:40.675318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.578 [2024-06-08 00:44:40.675323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.578 passed 00:19:22.578 Test: blockdev nvme passthru rw ...passed 00:19:22.578 Test: blockdev nvme passthru vendor specific ...[2024-06-08 00:44:40.759856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.578 [2024-06-08 00:44:40.759865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.578 [2024-06-08 00:44:40.760080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.578 [2024-06-08 00:44:40.760086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.578 [2024-06-08 00:44:40.760299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.578 [2024-06-08 00:44:40.760306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.578 [2024-06-08 00:44:40.760518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.578 [2024-06-08 00:44:40.760527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.578 passed 00:19:22.578 Test: blockdev nvme admin passthru ...passed 00:19:22.578 Test: blockdev copy ...passed 00:19:22.578 00:19:22.578 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.578 suites 1 1 n/a 0 0 00:19:22.578 tests 23 23 23 0 0 00:19:22.578 asserts 152 152 152 0 n/a 00:19:22.578 00:19:22.578 Elapsed time = 1.364 seconds 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.839 00:44:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.839 rmmod nvme_tcp 00:19:22.839 rmmod nvme_fabrics 00:19:22.839 rmmod nvme_keyring 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 416834 ']' 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 416834 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
416834 ']' 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 416834 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 416834 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 416834' 00:19:22.839 killing process with pid 416834 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 416834 00:19:22.839 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 416834 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.100 00:44:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.013 00:44:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.013 00:19:25.013 real 0m11.429s 00:19:25.013 user 0m12.588s 00:19:25.013 sys 0m5.651s 00:19:25.013 00:44:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:25.013 00:44:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:25.013 ************************************ 00:19:25.013 END TEST nvmf_bdevio 00:19:25.013 ************************************ 00:19:25.274 00:44:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.274 00:44:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:25.274 00:44:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:25.274 00:44:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.274 ************************************ 00:19:25.274 START TEST nvmf_auth_target 00:19:25.274 ************************************ 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.274 * Looking for test storage... 
00:19:25.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.274 00:44:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.275 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.866 00:44:49 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.866 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:31.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:31.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:19:31.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:31.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.867 00:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.867 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.867 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.867 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.867 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:19:32.128 00:19:32.128 --- 10.0.0.2 ping statistics --- 00:19:32.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.128 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:19:32.128 00:19:32.128 --- 10.0.0.1 ping statistics --- 00:19:32.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.128 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=421376 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 421376 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 421376 ']' 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
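The namespace plumbing repeats for every target test in this job: port 0 of the E810 pair is moved into a private namespace as the target side while port 1 stays in the root namespace as the initiator, so traffic between 10.0.0.1 and 10.0.0.2 really crosses the link between the two ports, and the target binary is then launched under ip netns exec (the NVMF_TARGET_NS_CMD prefix). Condensed from the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and back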
00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.128 00:44:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=421431 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=130d2c82d843e37c42c1607f108ce5a7f6604f18cb408cf0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SLB 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 130d2c82d843e37c42c1607f108ce5a7f6604f18cb408cf0 0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 130d2c82d843e37c42c1607f108ce5a7f6604f18cb408cf0 0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=130d2c82d843e37c42c1607f108ce5a7f6604f18cb408cf0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SLB 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SLB 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.SLB 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ca3ba5d1146b7fac411622f5f889a9d740268546583760d9575c4c74c21afac 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XK4 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ca3ba5d1146b7fac411622f5f889a9d740268546583760d9575c4c74c21afac 3 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ca3ba5d1146b7fac411622f5f889a9d740268546583760d9575c4c74c21afac 3 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ca3ba5d1146b7fac411622f5f889a9d740268546583760d9575c4c74c21afac 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XK4 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XK4 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.XK4 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=679f7de6e3a7c8e01e9c673603bd0902 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lyp 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 679f7de6e3a7c8e01e9c673603bd0902 1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 679f7de6e3a7c8e01e9c673603bd0902 1 00:19:33.070 00:44:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=679f7de6e3a7c8e01e9c673603bd0902 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lyp 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lyp 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Lyp 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aae6e55931c47c1053503d9699b81c3f0856071f51c60c35 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZZ2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aae6e55931c47c1053503d9699b81c3f0856071f51c60c35 2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aae6e55931c47c1053503d9699b81c3f0856071f51c60c35 2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aae6e55931c47c1053503d9699b81c3f0856071f51c60c35 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZZ2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZZ2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ZZ2 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.070 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:33.071 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:33.071 00:44:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.071 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e21232f389e28d95affc4b656b82bb153b2c9b0f01bf74bf 00:19:33.071 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.331 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ClC 00:19:33.331 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e21232f389e28d95affc4b656b82bb153b2c9b0f01bf74bf 2 00:19:33.331 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e21232f389e28d95affc4b656b82bb153b2c9b0f01bf74bf 2 00:19:33.331 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e21232f389e28d95affc4b656b82bb153b2c9b0f01bf74bf 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ClC 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ClC 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ClC 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cc452020b0c860f4e91106e10c51137e 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uI0 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cc452020b0c860f4e91106e10c51137e 1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cc452020b0c860f4e91106e10c51137e 1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cc452020b0c860f4e91106e10c51137e 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uI0 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uI0 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.uI0 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ac4ea56c032e13207515cd3737ffd8c35f976723e3077dc11da50cb0d0989c2 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Hvt 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ac4ea56c032e13207515cd3737ffd8c35f976723e3077dc11da50cb0d0989c2 3 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ac4ea56c032e13207515cd3737ffd8c35f976723e3077dc11da50cb0d0989c2 3 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ac4ea56c032e13207515cd3737ffd8c35f976723e3077dc11da50cb0d0989c2 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Hvt 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Hvt 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Hvt 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 421376 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 421376 ']' 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
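[editor's note] The gen_dhchap_key calls traced above pull random hex from /dev/urandom via xxd and hand it to the inline `python -` step, which wraps it into a DHHC-1 secret. A minimal standalone sketch of that wrapping, assuming the payload is the ASCII hex key followed by its little-endian CRC32, base64-encoded — which matches the DHHC-1:00:MTMwZDJj...== style secrets that reappear verbatim on the `nvme connect --dhchap-secret/--dhchap-ctrl-secret` lines later in this log:

    import base64, os, struct, zlib

    # Digest identifiers used in the DHHC-1:<id>: prefix, matching the
    # digests map in the trace: null=0, sha256=1, sha384=2, sha512=3.
    DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

    def gen_dhchap_key(digest: str, length: int) -> str:
        # xxd -p -c0 -l <length/2> /dev/urandom => 'length' hex characters
        key = os.urandom(length // 2).hex()
        # Payload: the ASCII hex key plus its CRC32 (little-endian),
        # then base64 -- what the inline 'python -' step appears to emit.
        payload = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
        return "DHHC-1:%02x:%s:" % (DIGESTS[digest],
                                    base64.b64encode(payload).decode())

    print(gen_dhchap_key("null", 48))   # e.g. DHHC-1:00:MTMwZDJj...==:

The same secrets are what the host later presents during DH-HMAC-CHAP negotiation, so the keyN/ckeyN files written here and the command-line secrets are two encodings of one key.
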
00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:33.332 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 421431 /var/tmp/host.sock 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 421431 ']' 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:33.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SLB 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SLB 00:19:33.593 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SLB 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.XK4 ]] 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XK4 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XK4 00:19:33.854 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XK4 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Lyp 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Lyp 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Lyp 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ZZ2 ]] 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZ2 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZ2 00:19:34.114 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZ2 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ClC 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ClC 00:19:34.374 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ClC 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.uI0 ]] 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uI0 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uI0 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.uI0 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Hvt 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Hvt 00:19:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Hvt 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.899 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.899 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.163 00:19:35.163 00:44:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.163 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.163 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.424 { 00:19:35.424 "cntlid": 1, 00:19:35.424 "qid": 0, 00:19:35.424 "state": "enabled", 00:19:35.424 "listen_address": { 00:19:35.424 "trtype": "TCP", 00:19:35.424 "adrfam": "IPv4", 00:19:35.424 "traddr": "10.0.0.2", 00:19:35.424 "trsvcid": "4420" 00:19:35.424 }, 00:19:35.424 "peer_address": { 00:19:35.424 "trtype": "TCP", 00:19:35.424 "adrfam": "IPv4", 00:19:35.424 "traddr": "10.0.0.1", 00:19:35.424 "trsvcid": "40004" 00:19:35.424 }, 00:19:35.424 "auth": { 00:19:35.424 "state": "completed", 00:19:35.424 "digest": "sha256", 00:19:35.424 "dhgroup": "null" 00:19:35.424 } 00:19:35.424 } 00:19:35.424 ]' 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.424 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.684 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.254 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.515 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.776 00:19:36.776 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.776 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.776 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.776 { 00:19:36.776 "cntlid": 3, 00:19:36.776 "qid": 0, 00:19:36.776 "state": "enabled", 00:19:36.776 "listen_address": { 00:19:36.776 
"trtype": "TCP", 00:19:36.776 "adrfam": "IPv4", 00:19:36.776 "traddr": "10.0.0.2", 00:19:36.776 "trsvcid": "4420" 00:19:36.776 }, 00:19:36.776 "peer_address": { 00:19:36.776 "trtype": "TCP", 00:19:36.776 "adrfam": "IPv4", 00:19:36.776 "traddr": "10.0.0.1", 00:19:36.776 "trsvcid": "40034" 00:19:36.776 }, 00:19:36.776 "auth": { 00:19:36.776 "state": "completed", 00:19:36.776 "digest": "sha256", 00:19:36.776 "dhgroup": "null" 00:19:36.776 } 00:19:36.776 } 00:19:36.776 ]' 00:19:36.776 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.037 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.298 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.869 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.161 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.161 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.423 { 00:19:38.423 "cntlid": 5, 00:19:38.423 "qid": 0, 00:19:38.423 "state": "enabled", 00:19:38.423 "listen_address": { 00:19:38.423 "trtype": "TCP", 00:19:38.423 "adrfam": "IPv4", 00:19:38.423 "traddr": "10.0.0.2", 00:19:38.423 "trsvcid": "4420" 00:19:38.423 }, 00:19:38.423 "peer_address": { 00:19:38.423 "trtype": "TCP", 00:19:38.423 "adrfam": "IPv4", 00:19:38.423 "traddr": "10.0.0.1", 00:19:38.423 "trsvcid": "40056" 00:19:38.423 }, 00:19:38.423 "auth": { 00:19:38.423 "state": "completed", 00:19:38.423 "digest": "sha256", 00:19:38.423 "dhgroup": "null" 00:19:38.423 } 00:19:38.423 } 00:19:38.423 ]' 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.423 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.684 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.685 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.685 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.685 00:44:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.685 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.685 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.627 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.888 00:19:39.888 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.888 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.888 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.150 { 00:19:40.150 "cntlid": 7, 00:19:40.150 "qid": 0, 00:19:40.150 "state": "enabled", 00:19:40.150 "listen_address": { 00:19:40.150 "trtype": "TCP", 00:19:40.150 "adrfam": "IPv4", 00:19:40.150 "traddr": "10.0.0.2", 00:19:40.150 "trsvcid": "4420" 00:19:40.150 }, 00:19:40.150 "peer_address": { 00:19:40.150 "trtype": "TCP", 00:19:40.150 "adrfam": "IPv4", 00:19:40.150 "traddr": "10.0.0.1", 00:19:40.150 "trsvcid": "40076" 00:19:40.150 }, 00:19:40.150 "auth": { 00:19:40.150 "state": "completed", 00:19:40.150 "digest": "sha256", 00:19:40.150 "dhgroup": "null" 00:19:40.150 } 00:19:40.150 } 00:19:40.150 ]' 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.150 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.411 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:19:41.354 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.355 
00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.355 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.615 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.615 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.615 00:44:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.615 { 00:19:41.615 "cntlid": 9, 00:19:41.615 "qid": 0, 00:19:41.615 "state": "enabled", 00:19:41.615 "listen_address": { 00:19:41.616 "trtype": "TCP", 00:19:41.616 "adrfam": "IPv4", 00:19:41.616 "traddr": "10.0.0.2", 00:19:41.616 "trsvcid": "4420" 00:19:41.616 }, 00:19:41.616 "peer_address": { 00:19:41.616 "trtype": "TCP", 00:19:41.616 "adrfam": "IPv4", 00:19:41.616 "traddr": "10.0.0.1", 00:19:41.616 "trsvcid": "40110" 00:19:41.616 }, 00:19:41.616 "auth": { 00:19:41.616 "state": "completed", 00:19:41.616 "digest": "sha256", 00:19:41.616 "dhgroup": "ffdhe2048" 00:19:41.616 } 00:19:41.616 } 00:19:41.616 ]' 00:19:41.616 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.877 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.877 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:19:42.819 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.819 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.820 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.820 00:45:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.820 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.081 00:19:43.081 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.081 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.081 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.342 { 00:19:43.342 "cntlid": 11, 00:19:43.342 "qid": 0, 00:19:43.342 "state": "enabled", 00:19:43.342 "listen_address": { 00:19:43.342 "trtype": "TCP", 00:19:43.342 "adrfam": "IPv4", 00:19:43.342 "traddr": "10.0.0.2", 00:19:43.342 "trsvcid": "4420" 00:19:43.342 }, 00:19:43.342 "peer_address": { 00:19:43.342 "trtype": "TCP", 00:19:43.342 "adrfam": "IPv4", 00:19:43.342 "traddr": "10.0.0.1", 00:19:43.342 "trsvcid": "40140" 00:19:43.342 }, 00:19:43.342 "auth": { 00:19:43.342 "state": "completed", 00:19:43.342 "digest": "sha256", 00:19:43.342 "dhgroup": "ffdhe2048" 00:19:43.342 } 00:19:43.342 } 00:19:43.342 ]' 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.342 00:45:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.342 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.604 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.547 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.548 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.548 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.548 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.548 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.548 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.808 00:19:44.808 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.808 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.808 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.069 { 00:19:45.069 "cntlid": 13, 00:19:45.069 "qid": 0, 00:19:45.069 "state": "enabled", 00:19:45.069 "listen_address": { 00:19:45.069 "trtype": "TCP", 00:19:45.069 "adrfam": "IPv4", 00:19:45.069 "traddr": "10.0.0.2", 00:19:45.069 "trsvcid": "4420" 00:19:45.069 }, 00:19:45.069 "peer_address": { 00:19:45.069 "trtype": "TCP", 00:19:45.069 "adrfam": "IPv4", 00:19:45.069 "traddr": "10.0.0.1", 00:19:45.069 "trsvcid": "33622" 00:19:45.069 }, 00:19:45.069 "auth": { 00:19:45.069 "state": "completed", 00:19:45.069 "digest": "sha256", 00:19:45.069 "dhgroup": "ffdhe2048" 00:19:45.069 } 00:19:45.069 } 00:19:45.069 ]' 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.069 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.330 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:19:46.273 00:45:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.273 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.534 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
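For readers following the xtrace output: each pass of the key loop above reduces to a short, fixed RPC sequence. Below is a condensed sketch of that sequence, reconstructed from this trace; hostrpc is the auth.sh wrapper around scripts/rpc.py -s /var/tmp/host.sock visible at target/auth.sh@31, rpc_cmd is assumed to be the target-side rpc.py wrapper on its default socket, and the keyring entries key0..key3 / ckey0..ckey2 are registered earlier in auth.sh, outside this excerpt.

    #!/usr/bin/env bash
    # One iteration of the sweep: sha256 digest, ffdhe2048 DH group, key slot 3.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1) Pin the host-side initiator to exactly one digest/dhgroup pair.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # 2) Allow the host NQN on the subsystem with the key under test;
    #    key3 has no paired controller key, so --dhchap-ctrlr-key is omitted.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --dhchap-key key3

    # 3) Attach a controller through the host service, which forces a
    #    DH-HMAC-CHAP exchange on the new admin queue.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

The nvmf_subsystem_get_qpairs call requested at the end of the entry above is what the following lines of the log print; it is where the negotiated digest, DH group, and auth state are checked.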
00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.534 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.796 { 00:19:46.796 "cntlid": 15, 00:19:46.796 "qid": 0, 00:19:46.796 "state": "enabled", 00:19:46.796 "listen_address": { 00:19:46.796 "trtype": "TCP", 00:19:46.796 "adrfam": "IPv4", 00:19:46.796 "traddr": "10.0.0.2", 00:19:46.796 "trsvcid": "4420" 00:19:46.796 }, 00:19:46.796 "peer_address": { 00:19:46.796 "trtype": "TCP", 00:19:46.796 "adrfam": "IPv4", 00:19:46.796 "traddr": "10.0.0.1", 00:19:46.796 "trsvcid": "33650" 00:19:46.796 }, 00:19:46.796 "auth": { 00:19:46.796 "state": "completed", 00:19:46.796 "digest": "sha256", 00:19:46.796 "dhgroup": "ffdhe2048" 00:19:46.796 } 00:19:46.796 } 00:19:46.796 ]' 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.796 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.056 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.628 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.889 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.150 00:19:48.150 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.150 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.150 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.410 { 00:19:48.410 "cntlid": 17, 00:19:48.410 "qid": 0, 00:19:48.410 "state": "enabled", 00:19:48.410 "listen_address": { 00:19:48.410 "trtype": "TCP", 00:19:48.410 "adrfam": "IPv4", 00:19:48.410 "traddr": "10.0.0.2", 00:19:48.410 "trsvcid": "4420" 00:19:48.410 }, 00:19:48.410 "peer_address": { 00:19:48.410 "trtype": "TCP", 00:19:48.410 "adrfam": "IPv4", 00:19:48.410 "traddr": "10.0.0.1", 00:19:48.410 "trsvcid": "33658" 00:19:48.410 }, 00:19:48.410 "auth": { 00:19:48.410 "state": "completed", 00:19:48.410 "digest": "sha256", 00:19:48.410 "dhgroup": "ffdhe3072" 00:19:48.410 } 00:19:48.410 } 00:19:48.410 ]' 00:19:48.410 00:45:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.410 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.671 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.242 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.502 
00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.502 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.763 00:19:49.763 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.763 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.763 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.024 { 00:19:50.024 "cntlid": 19, 00:19:50.024 "qid": 0, 00:19:50.024 "state": "enabled", 00:19:50.024 "listen_address": { 00:19:50.024 "trtype": "TCP", 00:19:50.024 "adrfam": "IPv4", 00:19:50.024 "traddr": "10.0.0.2", 00:19:50.024 "trsvcid": "4420" 00:19:50.024 }, 00:19:50.024 "peer_address": { 00:19:50.024 "trtype": "TCP", 00:19:50.024 "adrfam": "IPv4", 00:19:50.024 "traddr": "10.0.0.1", 00:19:50.024 "trsvcid": "33684" 00:19:50.024 }, 00:19:50.024 "auth": { 00:19:50.024 "state": "completed", 00:19:50.024 "digest": "sha256", 00:19:50.024 "dhgroup": "ffdhe3072" 00:19:50.024 } 00:19:50.024 } 00:19:50.024 ]' 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.024 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.285 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:19:50.855 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.855 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.855 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.855 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.115 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.116 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.376 00:19:51.376 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.376 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.376 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.636 { 00:19:51.636 "cntlid": 21, 00:19:51.636 "qid": 0, 00:19:51.636 "state": "enabled", 00:19:51.636 "listen_address": { 00:19:51.636 "trtype": "TCP", 00:19:51.636 "adrfam": "IPv4", 00:19:51.636 "traddr": "10.0.0.2", 00:19:51.636 "trsvcid": "4420" 00:19:51.636 }, 00:19:51.636 "peer_address": { 00:19:51.636 "trtype": "TCP", 00:19:51.636 "adrfam": "IPv4", 00:19:51.636 "traddr": "10.0.0.1", 00:19:51.636 "trsvcid": "33694" 00:19:51.636 }, 00:19:51.636 "auth": { 00:19:51.636 "state": "completed", 00:19:51.636 "digest": "sha256", 00:19:51.636 "dhgroup": "ffdhe3072" 00:19:51.636 } 00:19:51.636 } 00:19:51.636 ]' 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.636 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.637 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.897 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.840 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.148 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.148 { 00:19:53.148 "cntlid": 23, 00:19:53.148 "qid": 0, 00:19:53.148 "state": "enabled", 00:19:53.148 "listen_address": { 00:19:53.148 "trtype": "TCP", 00:19:53.148 "adrfam": "IPv4", 00:19:53.148 "traddr": "10.0.0.2", 00:19:53.148 "trsvcid": "4420" 00:19:53.148 }, 00:19:53.148 "peer_address": { 00:19:53.148 "trtype": "TCP", 00:19:53.148 
"adrfam": "IPv4", 00:19:53.148 "traddr": "10.0.0.1", 00:19:53.148 "trsvcid": "33724" 00:19:53.148 }, 00:19:53.148 "auth": { 00:19:53.148 "state": "completed", 00:19:53.148 "digest": "sha256", 00:19:53.148 "dhgroup": "ffdhe3072" 00:19:53.148 } 00:19:53.148 } 00:19:53.148 ]' 00:19:53.148 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.441 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.383 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.384 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.643 00:19:54.644 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.644 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.644 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.904 { 00:19:54.904 "cntlid": 25, 00:19:54.904 "qid": 0, 00:19:54.904 "state": "enabled", 00:19:54.904 "listen_address": { 00:19:54.904 "trtype": "TCP", 00:19:54.904 "adrfam": "IPv4", 00:19:54.904 "traddr": "10.0.0.2", 00:19:54.904 "trsvcid": "4420" 00:19:54.904 }, 00:19:54.904 "peer_address": { 00:19:54.904 "trtype": "TCP", 00:19:54.904 "adrfam": "IPv4", 00:19:54.904 "traddr": "10.0.0.1", 00:19:54.904 "trsvcid": "33760" 00:19:54.904 }, 00:19:54.904 "auth": { 00:19:54.904 "state": "completed", 00:19:54.904 "digest": "sha256", 00:19:54.904 "dhgroup": "ffdhe4096" 00:19:54.904 } 00:19:54.904 } 00:19:54.904 ]' 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.904 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.904 
00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.164 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.105 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.366 00:19:56.366 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.366 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.366 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.626 { 00:19:56.626 "cntlid": 27, 00:19:56.626 "qid": 0, 00:19:56.626 "state": "enabled", 00:19:56.626 "listen_address": { 00:19:56.626 "trtype": "TCP", 00:19:56.626 "adrfam": "IPv4", 00:19:56.626 "traddr": "10.0.0.2", 00:19:56.626 "trsvcid": "4420" 00:19:56.626 }, 00:19:56.626 "peer_address": { 00:19:56.626 "trtype": "TCP", 00:19:56.626 "adrfam": "IPv4", 00:19:56.626 "traddr": "10.0.0.1", 00:19:56.626 "trsvcid": "43078" 00:19:56.626 }, 00:19:56.626 "auth": { 00:19:56.626 "state": "completed", 00:19:56.626 "digest": "sha256", 00:19:56.626 "dhgroup": "ffdhe4096" 00:19:56.626 } 00:19:56.626 } 00:19:56.626 ]' 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.626 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.886 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
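The pass/fail decision for each iteration hangs on the three jq probes at target/auth.sh@46-@48 in the trace. Pulled out of the harness, the check amounts to the following minimal sketch (rpc_cmd is assumed to be the target-side rpc.py wrapper; the expected values are the ones for the ffdhe4096 pass just traced):

    # Fetch the authenticated qpair and assert how it authenticated.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # hash negotiated
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # DH group negotiated
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished

The surrounding harness treats any failed comparison as a test failure, which is why the trace shows each [[ ... ]] test with its escaped expected value.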
00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.828 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.088 00:19:58.088 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.088 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.088 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.349 
00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.349 { 00:19:58.349 "cntlid": 29, 00:19:58.349 "qid": 0, 00:19:58.349 "state": "enabled", 00:19:58.349 "listen_address": { 00:19:58.349 "trtype": "TCP", 00:19:58.349 "adrfam": "IPv4", 00:19:58.349 "traddr": "10.0.0.2", 00:19:58.349 "trsvcid": "4420" 00:19:58.349 }, 00:19:58.349 "peer_address": { 00:19:58.349 "trtype": "TCP", 00:19:58.349 "adrfam": "IPv4", 00:19:58.349 "traddr": "10.0.0.1", 00:19:58.349 "trsvcid": "43110" 00:19:58.349 }, 00:19:58.349 "auth": { 00:19:58.349 "state": "completed", 00:19:58.349 "digest": "sha256", 00:19:58.349 "dhgroup": "ffdhe4096" 00:19:58.349 } 00:19:58.349 } 00:19:58.349 ]' 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.349 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.350 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.350 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.610 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.180 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.181 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.441 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.702 00:19:59.702 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.702 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.702 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.963 { 00:19:59.963 "cntlid": 31, 00:19:59.963 "qid": 0, 00:19:59.963 "state": "enabled", 00:19:59.963 "listen_address": { 00:19:59.963 "trtype": "TCP", 00:19:59.963 "adrfam": "IPv4", 00:19:59.963 "traddr": "10.0.0.2", 00:19:59.963 "trsvcid": "4420" 00:19:59.963 }, 00:19:59.963 "peer_address": { 00:19:59.963 "trtype": "TCP", 00:19:59.963 "adrfam": "IPv4", 00:19:59.963 "traddr": "10.0.0.1", 00:19:59.963 "trsvcid": "43134" 00:19:59.963 }, 00:19:59.963 "auth": { 00:19:59.963 "state": "completed", 00:19:59.963 "digest": "sha256", 00:19:59.963 "dhgroup": "ffdhe4096" 00:19:59.963 } 00:19:59.963 } 00:19:59.963 ]' 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.963 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.223 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:01.166 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.426 00:20:01.426 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.426 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.426 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.687 { 00:20:01.687 "cntlid": 33, 00:20:01.687 "qid": 0, 00:20:01.687 "state": "enabled", 00:20:01.687 "listen_address": { 00:20:01.687 "trtype": "TCP", 00:20:01.687 "adrfam": "IPv4", 00:20:01.687 "traddr": "10.0.0.2", 00:20:01.687 "trsvcid": "4420" 00:20:01.687 }, 00:20:01.687 "peer_address": { 00:20:01.687 "trtype": "TCP", 00:20:01.687 "adrfam": "IPv4", 00:20:01.687 "traddr": "10.0.0.1", 00:20:01.687 "trsvcid": "43162" 00:20:01.687 }, 00:20:01.687 "auth": { 00:20:01.687 "state": "completed", 00:20:01.687 "digest": "sha256", 00:20:01.687 "dhgroup": "ffdhe6144" 00:20:01.687 } 00:20:01.687 } 00:20:01.687 ]' 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.687 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.947 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:02.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.889 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.889 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.150 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.411 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.411 { 00:20:03.411 "cntlid": 35, 00:20:03.411 "qid": 0, 00:20:03.411 "state": "enabled", 00:20:03.411 "listen_address": { 00:20:03.411 "trtype": "TCP", 00:20:03.411 "adrfam": "IPv4", 00:20:03.411 "traddr": "10.0.0.2", 00:20:03.411 "trsvcid": "4420" 00:20:03.411 }, 00:20:03.411 "peer_address": { 00:20:03.411 "trtype": "TCP", 00:20:03.411 "adrfam": "IPv4", 00:20:03.411 "traddr": "10.0.0.1", 00:20:03.411 "trsvcid": "43194" 00:20:03.411 }, 00:20:03.411 "auth": { 00:20:03.411 "state": "completed", 00:20:03.411 "digest": "sha256", 00:20:03.411 "dhgroup": "ffdhe6144" 00:20:03.411 } 00:20:03.411 } 00:20:03.411 ]' 00:20:03.411 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.411 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.411 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.671 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.672 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.672 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.672 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.672 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.672 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
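[Note] Each keyid pass in this trace is one complete DH-HMAC-CHAP round trip; the pass that just finished above used key1. Condensed into its essential steps, it looks like the sketch below (not verbatim auth.sh: it assumes the NQNs shown in the trace and keyring names key1/ckey1 registered earlier in the run; the nvmf_* calls go through the rpc_cmd wrapper to the target app's default RPC socket, while the bdev_nvme_* calls go to the host-role SPDK app behind /var/tmp/host.sock, as in the hostrpc helper at target/auth.sh@31):

rpc='scripts/rpc.py'                            # target-side app, default socket
hostrpc='scripts/rpc.py -s /var/tmp/host.sock'  # host-role app, per auth.sh@31
subnqn='nqn.2024-03.io.spdk:cnode0'
hostnqn='nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be'

# Pin the host to a single digest/dhgroup so the negotiation is deterministic.
$hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
# Allow the host on the subsystem; the ctrlr key makes authentication bidirectional.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Attaching the controller is what actually runs the authentication transaction.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs "$subnqn"        # .[0].auth.state should be "completed"
$hostrpc bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"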
00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.614 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.185 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.185 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.185 { 00:20:05.186 "cntlid": 37, 00:20:05.186 "qid": 0, 00:20:05.186 "state": "enabled", 00:20:05.186 "listen_address": { 00:20:05.186 "trtype": "TCP", 00:20:05.186 "adrfam": "IPv4", 00:20:05.186 "traddr": "10.0.0.2", 00:20:05.186 "trsvcid": "4420" 00:20:05.186 }, 00:20:05.186 "peer_address": { 00:20:05.186 "trtype": "TCP", 00:20:05.186 "adrfam": "IPv4", 00:20:05.186 "traddr": "10.0.0.1", 00:20:05.186 "trsvcid": "51410" 00:20:05.186 }, 00:20:05.186 "auth": { 00:20:05.186 "state": "completed", 00:20:05.186 "digest": "sha256", 00:20:05.186 "dhgroup": "ffdhe6144" 00:20:05.186 } 00:20:05.186 } 00:20:05.186 ]' 00:20:05.186 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:05.186 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.186 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.186 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.186 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.446 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.446 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.446 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.446 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.387 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.647 00:20:06.647 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.647 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.647 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.908 { 00:20:06.908 "cntlid": 39, 00:20:06.908 "qid": 0, 00:20:06.908 "state": "enabled", 00:20:06.908 "listen_address": { 00:20:06.908 "trtype": "TCP", 00:20:06.908 "adrfam": "IPv4", 00:20:06.908 "traddr": "10.0.0.2", 00:20:06.908 "trsvcid": "4420" 00:20:06.908 }, 00:20:06.908 "peer_address": { 00:20:06.908 "trtype": "TCP", 00:20:06.908 "adrfam": "IPv4", 00:20:06.908 "traddr": "10.0.0.1", 00:20:06.908 "trsvcid": "51444" 00:20:06.908 }, 00:20:06.908 "auth": { 00:20:06.908 "state": "completed", 00:20:06.908 "digest": "sha256", 00:20:06.908 "dhgroup": "ffdhe6144" 00:20:06.908 } 00:20:06.908 } 00:20:06.908 ]' 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.908 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.168 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.168 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.168 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.169 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:08.111 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.112 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.711 00:20:08.711 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.711 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.711 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.971 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.971 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.971 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.971 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.971 00:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.972 { 00:20:08.972 "cntlid": 41, 00:20:08.972 "qid": 0, 00:20:08.972 "state": "enabled", 00:20:08.972 "listen_address": { 00:20:08.972 "trtype": "TCP", 00:20:08.972 "adrfam": "IPv4", 00:20:08.972 "traddr": "10.0.0.2", 00:20:08.972 "trsvcid": "4420" 00:20:08.972 }, 00:20:08.972 "peer_address": { 00:20:08.972 "trtype": "TCP", 00:20:08.972 "adrfam": "IPv4", 00:20:08.972 "traddr": "10.0.0.1", 00:20:08.972 "trsvcid": "51454" 00:20:08.972 }, 00:20:08.972 "auth": { 00:20:08.972 "state": "completed", 00:20:08.972 "digest": "sha256", 00:20:08.972 "dhgroup": "ffdhe8192" 00:20:08.972 } 00:20:08.972 } 00:20:08.972 ]' 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.972 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.233 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.805 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.066 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.638 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.638 { 00:20:10.638 "cntlid": 43, 00:20:10.638 "qid": 0, 00:20:10.638 "state": "enabled", 00:20:10.638 "listen_address": { 00:20:10.638 "trtype": "TCP", 00:20:10.638 "adrfam": "IPv4", 00:20:10.638 "traddr": "10.0.0.2", 00:20:10.638 "trsvcid": "4420" 00:20:10.638 }, 00:20:10.638 "peer_address": { 
00:20:10.638 "trtype": "TCP", 00:20:10.638 "adrfam": "IPv4", 00:20:10.638 "traddr": "10.0.0.1", 00:20:10.638 "trsvcid": "51480" 00:20:10.638 }, 00:20:10.638 "auth": { 00:20:10.638 "state": "completed", 00:20:10.638 "digest": "sha256", 00:20:10.638 "dhgroup": "ffdhe8192" 00:20:10.638 } 00:20:10.638 } 00:20:10.638 ]' 00:20:10.638 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.898 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.898 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.898 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.898 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.898 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.898 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.898 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.159 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.731 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.993 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.565 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.565 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.565 { 00:20:12.565 "cntlid": 45, 00:20:12.565 "qid": 0, 00:20:12.565 "state": "enabled", 00:20:12.565 "listen_address": { 00:20:12.565 "trtype": "TCP", 00:20:12.565 "adrfam": "IPv4", 00:20:12.565 "traddr": "10.0.0.2", 00:20:12.565 "trsvcid": "4420" 00:20:12.565 }, 00:20:12.565 "peer_address": { 00:20:12.565 "trtype": "TCP", 00:20:12.565 "adrfam": "IPv4", 00:20:12.565 "traddr": "10.0.0.1", 00:20:12.565 "trsvcid": "51508" 00:20:12.565 }, 00:20:12.565 "auth": { 00:20:12.565 "state": "completed", 00:20:12.565 "digest": "sha256", 00:20:12.566 "dhgroup": "ffdhe8192" 00:20:12.566 } 00:20:12.566 } 00:20:12.566 ]' 00:20:12.566 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.826 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.826 00:45:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.087 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.658 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.918 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
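[Note] The key3 pass being set up above differs from keys 0-2: there is no matching ctrlr key, so the ${ckeys[$3]:+...} expansion at target/auth.sh@37 produces an empty array and authentication is unidirectional (the target authenticates the host, but not vice versa). A standalone bash illustration of that expansion, with placeholder values not taken from the run:

ckeys=([0]=k [1]=k [2]=k [3]=)   # slot 3 deliberately empty, as in auth.sh
for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# key0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
# key3 -> 0 extra arg(s):

That is why the add_host and attach_controller calls above pass only --dhchap-key key3, and why the nvme connect for this round carries a --dhchap-secret but no --dhchap-ctrl-secret.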
00:20:14.490 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.490 { 00:20:14.490 "cntlid": 47, 00:20:14.490 "qid": 0, 00:20:14.490 "state": "enabled", 00:20:14.490 "listen_address": { 00:20:14.490 "trtype": "TCP", 00:20:14.490 "adrfam": "IPv4", 00:20:14.490 "traddr": "10.0.0.2", 00:20:14.490 "trsvcid": "4420" 00:20:14.490 }, 00:20:14.490 "peer_address": { 00:20:14.490 "trtype": "TCP", 00:20:14.490 "adrfam": "IPv4", 00:20:14.490 "traddr": "10.0.0.1", 00:20:14.490 "trsvcid": "51538" 00:20:14.490 }, 00:20:14.490 "auth": { 00:20:14.490 "state": "completed", 00:20:14.490 "digest": "sha256", 00:20:14.490 "dhgroup": "ffdhe8192" 00:20:14.490 } 00:20:14.490 } 00:20:14.490 ]' 00:20:14.490 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.751 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.011 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 
00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.582 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.842 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:15.842 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.842 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.843 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.843 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.103 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.104 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.104 00:45:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.104 { 00:20:16.104 "cntlid": 49, 00:20:16.104 "qid": 0, 00:20:16.104 "state": "enabled", 00:20:16.104 "listen_address": { 00:20:16.104 "trtype": "TCP", 00:20:16.104 "adrfam": "IPv4", 00:20:16.104 "traddr": "10.0.0.2", 00:20:16.104 "trsvcid": "4420" 00:20:16.104 }, 00:20:16.104 "peer_address": { 00:20:16.104 "trtype": "TCP", 00:20:16.104 "adrfam": "IPv4", 00:20:16.104 "traddr": "10.0.0.1", 00:20:16.104 "trsvcid": "55952" 00:20:16.104 }, 00:20:16.104 "auth": { 00:20:16.104 "state": "completed", 00:20:16.104 "digest": "sha384", 00:20:16.104 "dhgroup": "null" 00:20:16.104 } 00:20:16.104 } 00:20:16.104 ]' 00:20:16.104 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.104 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.104 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.364 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.308 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.568 00:20:17.568 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.568 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.568 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.829 { 00:20:17.829 "cntlid": 51, 00:20:17.829 "qid": 0, 00:20:17.829 "state": "enabled", 00:20:17.829 "listen_address": { 00:20:17.829 "trtype": "TCP", 00:20:17.829 "adrfam": "IPv4", 00:20:17.829 "traddr": "10.0.0.2", 00:20:17.829 "trsvcid": "4420" 00:20:17.829 }, 00:20:17.829 "peer_address": { 00:20:17.829 "trtype": "TCP", 00:20:17.829 "adrfam": "IPv4", 00:20:17.829 "traddr": "10.0.0.1", 00:20:17.829 "trsvcid": "55994" 00:20:17.829 }, 00:20:17.829 "auth": { 00:20:17.829 "state": "completed", 00:20:17.829 "digest": "sha384", 00:20:17.829 "dhgroup": "null" 00:20:17.829 } 00:20:17.829 } 00:20:17.829 ]' 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.829 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.829 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
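[Note] The digest loop has now advanced to sha384 with dhgroup "null", i.e. DH-HMAC-CHAP without the optional FFDHE exchange. The digest and dhgroup checks just above (auth.sh@46-47), together with the state check that follows, read back what the qpair actually negotiated. A condensed equivalent of that verification, assuming the same target RPC socket and subsystem NQN as the trace:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # no FFDHE group used
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # transaction finished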
00:20:17.829 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.829 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.829 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.829 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.089 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:19.031 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.031 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:19.032 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.292 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.292 { 00:20:19.292 "cntlid": 53, 00:20:19.292 "qid": 0, 00:20:19.292 "state": "enabled", 00:20:19.292 "listen_address": { 00:20:19.292 "trtype": "TCP", 00:20:19.292 "adrfam": "IPv4", 00:20:19.292 "traddr": "10.0.0.2", 00:20:19.292 "trsvcid": "4420" 00:20:19.292 }, 00:20:19.292 "peer_address": { 00:20:19.292 "trtype": "TCP", 00:20:19.292 "adrfam": "IPv4", 00:20:19.292 "traddr": "10.0.0.1", 00:20:19.292 "trsvcid": "56006" 00:20:19.292 }, 00:20:19.292 "auth": { 00:20:19.292 "state": "completed", 00:20:19.292 "digest": "sha384", 00:20:19.292 "dhgroup": "null" 00:20:19.292 } 00:20:19.292 } 00:20:19.292 ]' 00:20:19.292 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.553 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.813 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.383 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.383 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.643 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:20.643 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.643 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.644 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.904 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.904 00:45:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.164 { 00:20:21.164 "cntlid": 55, 00:20:21.164 "qid": 0, 00:20:21.164 "state": "enabled", 00:20:21.164 "listen_address": { 00:20:21.164 "trtype": "TCP", 00:20:21.164 "adrfam": "IPv4", 00:20:21.164 "traddr": "10.0.0.2", 00:20:21.164 "trsvcid": "4420" 00:20:21.164 }, 00:20:21.164 "peer_address": { 00:20:21.164 "trtype": "TCP", 00:20:21.164 "adrfam": "IPv4", 00:20:21.164 "traddr": "10.0.0.1", 00:20:21.164 "trsvcid": "56038" 00:20:21.164 }, 00:20:21.164 "auth": { 00:20:21.164 "state": "completed", 00:20:21.164 "digest": "sha384", 00:20:21.164 "dhgroup": "null" 00:20:21.164 } 00:20:21.164 } 00:20:21.164 ]' 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.164 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.424 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.994 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.254 
00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.254 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.514 00:20:22.514 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.515 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.515 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.775 { 00:20:22.775 "cntlid": 57, 00:20:22.775 "qid": 0, 00:20:22.775 "state": "enabled", 00:20:22.775 "listen_address": { 00:20:22.775 "trtype": "TCP", 00:20:22.775 "adrfam": "IPv4", 00:20:22.775 "traddr": "10.0.0.2", 00:20:22.775 "trsvcid": "4420" 00:20:22.775 }, 00:20:22.775 "peer_address": { 00:20:22.775 "trtype": "TCP", 00:20:22.775 "adrfam": "IPv4", 00:20:22.775 "traddr": "10.0.0.1", 00:20:22.775 "trsvcid": "56068" 00:20:22.775 }, 00:20:22.775 "auth": { 00:20:22.775 "state": "completed", 00:20:22.775 "digest": "sha384", 00:20:22.775 "dhgroup": "ffdhe2048" 00:20:22.775 } 00:20:22.775 } 00:20:22.775 ]' 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.775 00:45:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.775 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.041 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.678 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.939 00:45:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.939 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.200 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.200 { 00:20:24.200 "cntlid": 59, 00:20:24.200 "qid": 0, 00:20:24.200 "state": "enabled", 00:20:24.200 "listen_address": { 00:20:24.200 "trtype": "TCP", 00:20:24.200 "adrfam": "IPv4", 00:20:24.200 "traddr": "10.0.0.2", 00:20:24.200 "trsvcid": "4420" 00:20:24.200 }, 00:20:24.200 "peer_address": { 00:20:24.200 "trtype": "TCP", 00:20:24.200 "adrfam": "IPv4", 00:20:24.200 "traddr": "10.0.0.1", 00:20:24.200 "trsvcid": "56102" 00:20:24.200 }, 00:20:24.200 "auth": { 00:20:24.200 "state": "completed", 00:20:24.200 "digest": "sha384", 00:20:24.200 "dhgroup": "ffdhe2048" 00:20:24.200 } 00:20:24.200 } 00:20:24.200 ]' 00:20:24.200 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.460 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.461 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.721 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.304 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.564 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.825 00:20:25.825 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.825 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.825 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.825 { 00:20:25.825 "cntlid": 61, 00:20:25.825 "qid": 0, 00:20:25.825 "state": "enabled", 00:20:25.825 "listen_address": { 00:20:25.825 "trtype": "TCP", 00:20:25.825 "adrfam": "IPv4", 00:20:25.825 "traddr": "10.0.0.2", 00:20:25.825 "trsvcid": "4420" 00:20:25.825 }, 00:20:25.825 "peer_address": { 00:20:25.825 "trtype": "TCP", 00:20:25.825 "adrfam": "IPv4", 00:20:25.825 "traddr": "10.0.0.1", 00:20:25.825 "trsvcid": "49232" 00:20:25.825 }, 00:20:25.825 "auth": { 00:20:25.825 "state": "completed", 00:20:25.825 "digest": "sha384", 00:20:25.825 "dhgroup": "ffdhe2048" 00:20:25.825 } 00:20:25.825 } 00:20:25.825 ]' 00:20:25.825 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.087 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.348 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:26.921 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.182 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.443 00:20:27.443 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.443 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.443 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.703 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.703 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.703 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.703 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.704 { 00:20:27.704 "cntlid": 63, 00:20:27.704 "qid": 0, 00:20:27.704 "state": "enabled", 00:20:27.704 "listen_address": { 00:20:27.704 "trtype": "TCP", 00:20:27.704 "adrfam": "IPv4", 00:20:27.704 "traddr": "10.0.0.2", 00:20:27.704 "trsvcid": "4420" 00:20:27.704 }, 00:20:27.704 "peer_address": { 00:20:27.704 "trtype": "TCP", 00:20:27.704 "adrfam": "IPv4", 00:20:27.704 "traddr": "10.0.0.1", 00:20:27.704 "trsvcid": "49248" 00:20:27.704 }, 00:20:27.704 "auth": { 00:20:27.704 "state": "completed", 00:20:27.704 "digest": 
"sha384", 00:20:27.704 "dhgroup": "ffdhe2048" 00:20:27.704 } 00:20:27.704 } 00:20:27.704 ]' 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.704 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.964 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.536 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.796 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.057 00:20:29.057 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.057 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.057 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.318 { 00:20:29.318 "cntlid": 65, 00:20:29.318 "qid": 0, 00:20:29.318 "state": "enabled", 00:20:29.318 "listen_address": { 00:20:29.318 "trtype": "TCP", 00:20:29.318 "adrfam": "IPv4", 00:20:29.318 "traddr": "10.0.0.2", 00:20:29.318 "trsvcid": "4420" 00:20:29.318 }, 00:20:29.318 "peer_address": { 00:20:29.318 "trtype": "TCP", 00:20:29.318 "adrfam": "IPv4", 00:20:29.318 "traddr": "10.0.0.1", 00:20:29.318 "trsvcid": "49270" 00:20:29.318 }, 00:20:29.318 "auth": { 00:20:29.318 "state": "completed", 00:20:29.318 "digest": "sha384", 00:20:29.318 "dhgroup": "ffdhe3072" 00:20:29.318 } 00:20:29.318 } 00:20:29.318 ]' 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.318 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.319 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.579 
00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.521 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.783 00:20:30.783 00:45:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.783 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.783 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.783 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.783 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.783 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.783 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.044 { 00:20:31.044 "cntlid": 67, 00:20:31.044 "qid": 0, 00:20:31.044 "state": "enabled", 00:20:31.044 "listen_address": { 00:20:31.044 "trtype": "TCP", 00:20:31.044 "adrfam": "IPv4", 00:20:31.044 "traddr": "10.0.0.2", 00:20:31.044 "trsvcid": "4420" 00:20:31.044 }, 00:20:31.044 "peer_address": { 00:20:31.044 "trtype": "TCP", 00:20:31.044 "adrfam": "IPv4", 00:20:31.044 "traddr": "10.0.0.1", 00:20:31.044 "trsvcid": "49280" 00:20:31.044 }, 00:20:31.044 "auth": { 00:20:31.044 "state": "completed", 00:20:31.044 "digest": "sha384", 00:20:31.044 "dhgroup": "ffdhe3072" 00:20:31.044 } 00:20:31.044 } 00:20:31.044 ]' 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.044 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.305 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.876 
00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.876 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.137 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.398 00:20:32.398 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.398 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.398 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.658 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.658 { 00:20:32.658 "cntlid": 69, 00:20:32.658 "qid": 0, 00:20:32.658 "state": "enabled", 00:20:32.658 "listen_address": { 
00:20:32.658 "trtype": "TCP", 00:20:32.658 "adrfam": "IPv4", 00:20:32.658 "traddr": "10.0.0.2", 00:20:32.658 "trsvcid": "4420" 00:20:32.658 }, 00:20:32.658 "peer_address": { 00:20:32.658 "trtype": "TCP", 00:20:32.658 "adrfam": "IPv4", 00:20:32.658 "traddr": "10.0.0.1", 00:20:32.658 "trsvcid": "49304" 00:20:32.658 }, 00:20:32.658 "auth": { 00:20:32.658 "state": "completed", 00:20:32.658 "digest": "sha384", 00:20:32.658 "dhgroup": "ffdhe3072" 00:20:32.658 } 00:20:32.658 } 00:20:32.658 ]' 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.659 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.919 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:33.490 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.751 
00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.751 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.012 00:20:34.012 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.012 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.012 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.273 { 00:20:34.273 "cntlid": 71, 00:20:34.273 "qid": 0, 00:20:34.273 "state": "enabled", 00:20:34.273 "listen_address": { 00:20:34.273 "trtype": "TCP", 00:20:34.273 "adrfam": "IPv4", 00:20:34.273 "traddr": "10.0.0.2", 00:20:34.273 "trsvcid": "4420" 00:20:34.273 }, 00:20:34.273 "peer_address": { 00:20:34.273 "trtype": "TCP", 00:20:34.273 "adrfam": "IPv4", 00:20:34.273 "traddr": "10.0.0.1", 00:20:34.273 "trsvcid": "49330" 00:20:34.273 }, 00:20:34.273 "auth": { 00:20:34.273 "state": "completed", 00:20:34.273 "digest": "sha384", 00:20:34.273 "dhgroup": "ffdhe3072" 00:20:34.273 } 00:20:34.273 } 00:20:34.273 ]' 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.273 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.534 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:35.103 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.364 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.625 00:20:35.625 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.625 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.625 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.885 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.885 { 00:20:35.885 "cntlid": 73, 00:20:35.885 "qid": 0, 00:20:35.885 "state": "enabled", 00:20:35.885 "listen_address": { 00:20:35.885 "trtype": "TCP", 00:20:35.885 "adrfam": "IPv4", 00:20:35.885 "traddr": "10.0.0.2", 00:20:35.885 "trsvcid": "4420" 00:20:35.885 }, 00:20:35.885 "peer_address": { 00:20:35.885 "trtype": "TCP", 00:20:35.885 "adrfam": "IPv4", 00:20:35.885 "traddr": "10.0.0.1", 00:20:35.885 "trsvcid": "47714" 00:20:35.885 }, 00:20:35.885 "auth": { 00:20:35.885 "state": "completed", 00:20:35.885 "digest": "sha384", 00:20:35.886 "dhgroup": "ffdhe4096" 00:20:35.886 } 00:20:35.886 } 00:20:35.886 ]' 00:20:35.886 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.886 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.886 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.886 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.886 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.146 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.146 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.146 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.146 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.089 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.349 00:20:37.349 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.349 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.349 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.616 { 00:20:37.616 "cntlid": 75, 00:20:37.616 "qid": 0, 00:20:37.616 "state": "enabled", 00:20:37.616 "listen_address": { 00:20:37.616 "trtype": "TCP", 00:20:37.616 "adrfam": "IPv4", 00:20:37.616 "traddr": "10.0.0.2", 00:20:37.616 "trsvcid": "4420" 00:20:37.616 }, 00:20:37.616 "peer_address": { 00:20:37.616 "trtype": "TCP", 00:20:37.616 "adrfam": "IPv4", 00:20:37.616 "traddr": "10.0.0.1", 00:20:37.616 "trsvcid": "47748" 00:20:37.616 }, 00:20:37.616 "auth": { 00:20:37.616 "state": "completed", 00:20:37.616 "digest": "sha384", 00:20:37.616 "dhgroup": "ffdhe4096" 00:20:37.616 } 00:20:37.616 } 00:20:37.616 ]' 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.616 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.916 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.491 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.752 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.012 00:20:39.012 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.012 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.012 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.273 { 00:20:39.273 "cntlid": 77, 00:20:39.273 "qid": 0, 00:20:39.273 "state": "enabled", 00:20:39.273 "listen_address": { 00:20:39.273 "trtype": "TCP", 00:20:39.273 "adrfam": "IPv4", 00:20:39.273 "traddr": "10.0.0.2", 00:20:39.273 "trsvcid": "4420" 00:20:39.273 }, 00:20:39.273 "peer_address": { 00:20:39.273 "trtype": "TCP", 00:20:39.273 "adrfam": "IPv4", 00:20:39.273 "traddr": "10.0.0.1", 00:20:39.273 "trsvcid": "47770" 00:20:39.273 }, 00:20:39.273 "auth": { 00:20:39.273 "state": "completed", 00:20:39.273 "digest": "sha384", 00:20:39.273 "dhgroup": "ffdhe4096" 00:20:39.273 } 00:20:39.273 } 00:20:39.273 ]' 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.273 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.534 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.105 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.365 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.625 00:20:40.625 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.625 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.625 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.885 { 00:20:40.885 "cntlid": 79, 00:20:40.885 "qid": 0, 00:20:40.885 "state": "enabled", 00:20:40.885 "listen_address": { 00:20:40.885 "trtype": "TCP", 00:20:40.885 "adrfam": "IPv4", 00:20:40.885 "traddr": "10.0.0.2", 00:20:40.885 "trsvcid": "4420" 00:20:40.885 }, 00:20:40.885 "peer_address": { 00:20:40.885 "trtype": "TCP", 00:20:40.885 "adrfam": "IPv4", 00:20:40.885 "traddr": "10.0.0.1", 00:20:40.885 "trsvcid": "47786" 00:20:40.885 }, 00:20:40.885 "auth": { 00:20:40.885 "state": "completed", 00:20:40.885 "digest": "sha384", 00:20:40.885 "dhgroup": "ffdhe4096" 00:20:40.885 } 00:20:40.885 } 00:20:40.885 ]' 00:20:40.885 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.885 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.145 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.714 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.714 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.974 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.233 00:20:42.233 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.233 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.233 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.493 { 00:20:42.493 "cntlid": 81, 00:20:42.493 "qid": 0, 00:20:42.493 "state": "enabled", 00:20:42.493 "listen_address": { 00:20:42.493 "trtype": "TCP", 00:20:42.493 "adrfam": "IPv4", 00:20:42.493 "traddr": "10.0.0.2", 00:20:42.493 "trsvcid": "4420" 00:20:42.493 }, 00:20:42.493 "peer_address": { 00:20:42.493 "trtype": "TCP", 00:20:42.493 "adrfam": "IPv4", 00:20:42.493 "traddr": "10.0.0.1", 00:20:42.493 "trsvcid": "47806" 00:20:42.493 }, 00:20:42.493 "auth": { 00:20:42.493 "state": "completed", 00:20:42.493 "digest": "sha384", 00:20:42.493 "dhgroup": "ffdhe6144" 00:20:42.493 } 00:20:42.493 } 00:20:42.493 ]' 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.493 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.754 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.754 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.754 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.754 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.754 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.754 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.696 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.697 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.266 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.266 { 00:20:44.266 "cntlid": 83, 00:20:44.266 "qid": 0, 00:20:44.266 "state": "enabled", 00:20:44.266 "listen_address": { 00:20:44.266 "trtype": "TCP", 00:20:44.266 "adrfam": "IPv4", 00:20:44.266 "traddr": "10.0.0.2", 00:20:44.266 "trsvcid": "4420" 00:20:44.266 }, 00:20:44.266 "peer_address": { 00:20:44.266 "trtype": "TCP", 00:20:44.266 "adrfam": "IPv4", 00:20:44.266 "traddr": "10.0.0.1", 00:20:44.266 "trsvcid": "47852" 00:20:44.266 }, 00:20:44.266 "auth": { 00:20:44.266 "state": "completed", 00:20:44.266 "digest": "sha384", 00:20:44.266 
"dhgroup": "ffdhe6144" 00:20:44.266 } 00:20:44.266 } 00:20:44.266 ]' 00:20:44.266 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.527 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.787 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.356 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.616 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.876 00:20:45.876 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.876 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.876 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.136 { 00:20:46.136 "cntlid": 85, 00:20:46.136 "qid": 0, 00:20:46.136 "state": "enabled", 00:20:46.136 "listen_address": { 00:20:46.136 "trtype": "TCP", 00:20:46.136 "adrfam": "IPv4", 00:20:46.136 "traddr": "10.0.0.2", 00:20:46.136 "trsvcid": "4420" 00:20:46.136 }, 00:20:46.136 "peer_address": { 00:20:46.136 "trtype": "TCP", 00:20:46.136 "adrfam": "IPv4", 00:20:46.136 "traddr": "10.0.0.1", 00:20:46.136 "trsvcid": "44442" 00:20:46.136 }, 00:20:46.136 "auth": { 00:20:46.136 "state": "completed", 00:20:46.136 "digest": "sha384", 00:20:46.136 "dhgroup": "ffdhe6144" 00:20:46.136 } 00:20:46.136 } 00:20:46.136 ]' 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.136 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.396 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.337 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.597 00:20:47.597 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.597 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.597 00:46:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.857 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.857 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.857 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.857 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.857 { 00:20:47.857 "cntlid": 87, 00:20:47.857 "qid": 0, 00:20:47.857 "state": "enabled", 00:20:47.857 "listen_address": { 00:20:47.857 "trtype": "TCP", 00:20:47.857 "adrfam": "IPv4", 00:20:47.857 "traddr": "10.0.0.2", 00:20:47.857 "trsvcid": "4420" 00:20:47.857 }, 00:20:47.857 "peer_address": { 00:20:47.857 "trtype": "TCP", 00:20:47.857 "adrfam": "IPv4", 00:20:47.857 "traddr": "10.0.0.1", 00:20:47.857 "trsvcid": "44468" 00:20:47.857 }, 00:20:47.857 "auth": { 00:20:47.857 "state": "completed", 00:20:47.857 "digest": "sha384", 00:20:47.857 "dhgroup": "ffdhe6144" 00:20:47.857 } 00:20:47.857 } 00:20:47.857 ]' 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.857 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.118 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:49.058 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.058 00:46:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.058 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.630 00:20:49.630 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.630 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.630 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.891 { 00:20:49.891 "cntlid": 89, 00:20:49.891 "qid": 0, 00:20:49.891 "state": "enabled", 00:20:49.891 "listen_address": { 00:20:49.891 "trtype": "TCP", 00:20:49.891 "adrfam": "IPv4", 00:20:49.891 "traddr": "10.0.0.2", 00:20:49.891 
"trsvcid": "4420" 00:20:49.891 }, 00:20:49.891 "peer_address": { 00:20:49.891 "trtype": "TCP", 00:20:49.891 "adrfam": "IPv4", 00:20:49.891 "traddr": "10.0.0.1", 00:20:49.891 "trsvcid": "44496" 00:20:49.891 }, 00:20:49.891 "auth": { 00:20:49.891 "state": "completed", 00:20:49.891 "digest": "sha384", 00:20:49.891 "dhgroup": "ffdhe8192" 00:20:49.891 } 00:20:49.891 } 00:20:49.891 ]' 00:20:49.891 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.891 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.151 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.721 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.981 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.551 00:20:51.551 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.551 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.551 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.812 { 00:20:51.812 "cntlid": 91, 00:20:51.812 "qid": 0, 00:20:51.812 "state": "enabled", 00:20:51.812 "listen_address": { 00:20:51.812 "trtype": "TCP", 00:20:51.812 "adrfam": "IPv4", 00:20:51.812 "traddr": "10.0.0.2", 00:20:51.812 "trsvcid": "4420" 00:20:51.812 }, 00:20:51.812 "peer_address": { 00:20:51.812 "trtype": "TCP", 00:20:51.812 "adrfam": "IPv4", 00:20:51.812 "traddr": "10.0.0.1", 00:20:51.812 "trsvcid": "44534" 00:20:51.812 }, 00:20:51.812 "auth": { 00:20:51.812 "state": "completed", 00:20:51.812 "digest": "sha384", 00:20:51.812 "dhgroup": "ffdhe8192" 00:20:51.812 } 00:20:51.812 } 00:20:51.812 ]' 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.812 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.812 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.812 00:46:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.812 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.073 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.686 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.946 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.515 00:20:53.515 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.515 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.515 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.775 { 00:20:53.775 "cntlid": 93, 00:20:53.775 "qid": 0, 00:20:53.775 "state": "enabled", 00:20:53.775 "listen_address": { 00:20:53.775 "trtype": "TCP", 00:20:53.775 "adrfam": "IPv4", 00:20:53.775 "traddr": "10.0.0.2", 00:20:53.775 "trsvcid": "4420" 00:20:53.775 }, 00:20:53.775 "peer_address": { 00:20:53.775 "trtype": "TCP", 00:20:53.775 "adrfam": "IPv4", 00:20:53.775 "traddr": "10.0.0.1", 00:20:53.775 "trsvcid": "44556" 00:20:53.775 }, 00:20:53.775 "auth": { 00:20:53.775 "state": "completed", 00:20:53.775 "digest": "sha384", 00:20:53.775 "dhgroup": "ffdhe8192" 00:20:53.775 } 00:20:53.775 } 00:20:53.775 ]' 00:20:53.775 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.776 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.036 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.607 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.867 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.867 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.867 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.867 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.438 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.438 00:46:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.438 { 00:20:55.438 "cntlid": 95, 00:20:55.438 "qid": 0, 00:20:55.438 "state": "enabled", 00:20:55.438 "listen_address": { 00:20:55.438 "trtype": "TCP", 00:20:55.438 "adrfam": "IPv4", 00:20:55.438 "traddr": "10.0.0.2", 00:20:55.438 "trsvcid": "4420" 00:20:55.438 }, 00:20:55.438 "peer_address": { 00:20:55.438 "trtype": "TCP", 00:20:55.438 "adrfam": "IPv4", 00:20:55.438 "traddr": "10.0.0.1", 00:20:55.438 "trsvcid": "45648" 00:20:55.438 }, 00:20:55.438 "auth": { 00:20:55.438 "state": "completed", 00:20:55.438 "digest": "sha384", 00:20:55.438 "dhgroup": "ffdhe8192" 00:20:55.438 } 00:20:55.438 } 00:20:55.438 ]' 00:20:55.438 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.699 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.959 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:20:56.529 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.530 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.790 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.050 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.050 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.050 { 00:20:57.050 "cntlid": 97, 00:20:57.050 "qid": 0, 00:20:57.050 "state": "enabled", 00:20:57.050 "listen_address": { 00:20:57.050 "trtype": "TCP", 00:20:57.050 "adrfam": "IPv4", 00:20:57.050 "traddr": "10.0.0.2", 00:20:57.050 "trsvcid": "4420" 00:20:57.050 }, 00:20:57.050 "peer_address": { 00:20:57.050 "trtype": "TCP", 00:20:57.050 "adrfam": "IPv4", 00:20:57.050 "traddr": "10.0.0.1", 00:20:57.050 "trsvcid": "45674" 00:20:57.051 }, 00:20:57.051 "auth": { 00:20:57.051 "state": "completed", 00:20:57.051 "digest": "sha512", 00:20:57.051 "dhgroup": "null" 00:20:57.051 } 00:20:57.051 } 00:20:57.051 ]' 00:20:57.051 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
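The iterations above all follow the same shape. Below is a condensed sketch of one round trip, using only the RPCs and flags visible in this trace; the shell variables are introduced here for readability, and running the target-side calls through plain rpc.py on its default socket (rpc_cmd in the trace) is an assumption:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side: allow the host with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attaching a controller forces the authentication handshake.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
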
-- # jq -r '.[0].auth.dhgroup' 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.311 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.571 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.141 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
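The nvme-cli leg that follows each RPC-level pass reconnects through the kernel initiator with the raw DH-HMAC-CHAP secrets. The secrets are elided below; the reading of the second DHHC-1 field (00 = non-transformed secret, 01/02/03 = SHA-256/384/512-transformed) comes from the NVMe DH-HMAC-CHAP spec and nvme-cli conventions, not from this log:

    # Flags copied from the trace; secret bodies elided.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret 'DHHC-1:00:...==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:...=:'
    nvme disconnect -n "$subnqn"

Passing --dhchap-ctrl-secret as well makes the authentication bidirectional: the host also verifies the controller's response.
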
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.402 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.663 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.663 { 00:20:58.663 "cntlid": 99, 00:20:58.663 "qid": 0, 00:20:58.663 "state": "enabled", 00:20:58.663 "listen_address": { 00:20:58.663 "trtype": "TCP", 00:20:58.663 "adrfam": "IPv4", 00:20:58.663 "traddr": "10.0.0.2", 00:20:58.663 "trsvcid": "4420" 00:20:58.663 }, 00:20:58.663 "peer_address": { 00:20:58.663 "trtype": "TCP", 00:20:58.663 "adrfam": "IPv4", 00:20:58.663 "traddr": "10.0.0.1", 00:20:58.663 "trsvcid": "45716" 00:20:58.663 }, 00:20:58.663 "auth": { 00:20:58.663 "state": "completed", 00:20:58.663 "digest": "sha512", 00:20:58.663 "dhgroup": "null" 00:20:58.663 } 00:20:58.663 } 00:20:58.663 ]' 00:20:58.663 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.923 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.923 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.923 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:58.923 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.923 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.923 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.923 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.184 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 
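After every attach, the script asserts the negotiated parameters by dumping the subsystem's qpairs and picking fields out with jq. A compact restatement of those checks for the sha512/null iterations, reusing $rpc and $subnqn from the sketch above:

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    # The trace compares these three fields against the loop's expectations.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth.state of "completed" on the enabled qpair is what distinguishes a genuinely authenticated connection from one that merely connected.
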
00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.755 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.015 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.276 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.276 { 00:21:00.276 "cntlid": 101, 00:21:00.276 "qid": 0, 00:21:00.276 "state": "enabled", 00:21:00.276 "listen_address": { 00:21:00.276 "trtype": "TCP", 00:21:00.276 "adrfam": "IPv4", 00:21:00.276 "traddr": "10.0.0.2", 00:21:00.276 "trsvcid": "4420" 00:21:00.276 }, 00:21:00.276 "peer_address": { 00:21:00.276 "trtype": "TCP", 00:21:00.276 "adrfam": "IPv4", 00:21:00.276 "traddr": "10.0.0.1", 00:21:00.276 "trsvcid": "45736" 00:21:00.276 }, 00:21:00.276 "auth": { 00:21:00.276 "state": "completed", 00:21:00.276 "digest": "sha512", 00:21:00.276 "dhgroup": "null" 00:21:00.276 } 00:21:00.276 } 00:21:00.276 ]' 00:21:00.276 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.536 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.537 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.537 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.477 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.478 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.738 00:21:01.738 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.738 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.738 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.998 { 00:21:01.998 "cntlid": 103, 00:21:01.998 "qid": 0, 00:21:01.998 "state": "enabled", 00:21:01.998 "listen_address": { 00:21:01.998 "trtype": "TCP", 00:21:01.998 "adrfam": "IPv4", 00:21:01.998 "traddr": "10.0.0.2", 00:21:01.998 "trsvcid": "4420" 00:21:01.998 }, 00:21:01.998 "peer_address": { 00:21:01.998 "trtype": "TCP", 00:21:01.998 "adrfam": "IPv4", 00:21:01.998 "traddr": "10.0.0.1", 00:21:01.998 "trsvcid": "45770" 00:21:01.998 }, 00:21:01.998 "auth": { 00:21:01.998 "state": "completed", 00:21:01.998 "digest": "sha512", 00:21:01.998 "dhgroup": "null" 00:21:01.998 } 00:21:01.998 } 00:21:01.998 ]' 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.998 00:46:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.998 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.259 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.831 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.091 00:46:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.091 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.352 00:21:03.352 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.352 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.352 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.613 { 00:21:03.613 "cntlid": 105, 00:21:03.613 "qid": 0, 00:21:03.613 "state": "enabled", 00:21:03.613 "listen_address": { 00:21:03.613 "trtype": "TCP", 00:21:03.613 "adrfam": "IPv4", 00:21:03.613 "traddr": "10.0.0.2", 00:21:03.613 "trsvcid": "4420" 00:21:03.613 }, 00:21:03.613 "peer_address": { 00:21:03.613 "trtype": "TCP", 00:21:03.613 "adrfam": "IPv4", 00:21:03.613 "traddr": "10.0.0.1", 00:21:03.613 "trsvcid": "45802" 00:21:03.613 }, 00:21:03.613 "auth": { 00:21:03.613 "state": "completed", 00:21:03.613 "digest": "sha512", 00:21:03.613 "dhgroup": "ffdhe2048" 00:21:03.613 } 00:21:03.613 } 00:21:03.613 ]' 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.613 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.614 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.614 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.614 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.614 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.874 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:04.445 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.445 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.445 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.445 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.705 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.973 00:21:04.973 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.973 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.973 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.237 { 00:21:05.237 "cntlid": 107, 00:21:05.237 "qid": 0, 00:21:05.237 "state": "enabled", 00:21:05.237 "listen_address": { 00:21:05.237 "trtype": "TCP", 00:21:05.237 "adrfam": "IPv4", 00:21:05.237 "traddr": "10.0.0.2", 00:21:05.237 "trsvcid": "4420" 00:21:05.237 }, 00:21:05.237 "peer_address": { 00:21:05.237 "trtype": "TCP", 00:21:05.237 "adrfam": "IPv4", 00:21:05.237 "traddr": "10.0.0.1", 00:21:05.237 "trsvcid": "43104" 00:21:05.237 }, 00:21:05.237 "auth": { 00:21:05.237 "state": "completed", 00:21:05.237 "digest": "sha512", 00:21:05.237 "dhgroup": "ffdhe2048" 00:21:05.237 } 00:21:05.237 } 00:21:05.237 ]' 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.237 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.498 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.439 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.440 00:46:24 
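Each iteration ends symmetrically: the host-side controller is detached, the nvme-cli connect/disconnect pass runs, and the host entry is revoked on the target so the next key starts from a clean subsystem. The RPC-level teardown, with the same assumed variables as in the earlier sketch:

    # Drop the host-side controller first, then revoke the host on the target.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
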
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.440 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.700 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.700 { 00:21:06.700 "cntlid": 109, 00:21:06.700 "qid": 0, 00:21:06.700 "state": "enabled", 00:21:06.700 "listen_address": { 00:21:06.700 "trtype": "TCP", 00:21:06.700 "adrfam": "IPv4", 00:21:06.700 "traddr": "10.0.0.2", 00:21:06.700 "trsvcid": "4420" 00:21:06.700 }, 00:21:06.700 "peer_address": { 00:21:06.700 "trtype": "TCP", 00:21:06.700 
"adrfam": "IPv4", 00:21:06.700 "traddr": "10.0.0.1", 00:21:06.700 "trsvcid": "43138" 00:21:06.700 }, 00:21:06.700 "auth": { 00:21:06.700 "state": "completed", 00:21:06.700 "digest": "sha512", 00:21:06.700 "dhgroup": "ffdhe2048" 00:21:06.700 } 00:21:06.700 } 00:21:06.700 ]' 00:21:06.700 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.960 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.251 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.821 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.082 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.342 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.342 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.603 { 00:21:08.603 "cntlid": 111, 00:21:08.603 "qid": 0, 00:21:08.603 "state": "enabled", 00:21:08.603 "listen_address": { 00:21:08.603 "trtype": "TCP", 00:21:08.603 "adrfam": "IPv4", 00:21:08.603 "traddr": "10.0.0.2", 00:21:08.603 "trsvcid": "4420" 00:21:08.603 }, 00:21:08.603 "peer_address": { 00:21:08.603 "trtype": "TCP", 00:21:08.603 "adrfam": "IPv4", 00:21:08.603 "traddr": "10.0.0.1", 00:21:08.603 "trsvcid": "43148" 00:21:08.603 }, 00:21:08.603 "auth": { 00:21:08.603 "state": "completed", 00:21:08.603 "digest": "sha512", 00:21:08.603 "dhgroup": "ffdhe2048" 00:21:08.603 } 00:21:08.603 } 00:21:08.603 ]' 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.603 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.863 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.434 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.694 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
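The loop markers in the trace (auth.sh@91-93 for digest, dhgroup, and keyid; @94 for the option reset; @96 for the connect) imply the sweep below, driving one full connect/verify/disconnect cycle per digest x dhgroup x key combination. The digests, dhgroups, and keys arrays are populated earlier in auth.sh and are not shown here; hostrpc is the trace's rpc.py wrapper bound to /var/tmp/host.sock:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Reset the host to exactly one digest/dhgroup before each run.
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
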
00:21:09.954 00:21:09.954 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.954 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.954 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.215 { 00:21:10.215 "cntlid": 113, 00:21:10.215 "qid": 0, 00:21:10.215 "state": "enabled", 00:21:10.215 "listen_address": { 00:21:10.215 "trtype": "TCP", 00:21:10.215 "adrfam": "IPv4", 00:21:10.215 "traddr": "10.0.0.2", 00:21:10.215 "trsvcid": "4420" 00:21:10.215 }, 00:21:10.215 "peer_address": { 00:21:10.215 "trtype": "TCP", 00:21:10.215 "adrfam": "IPv4", 00:21:10.215 "traddr": "10.0.0.1", 00:21:10.215 "trsvcid": "43168" 00:21:10.215 }, 00:21:10.215 "auth": { 00:21:10.215 "state": "completed", 00:21:10.215 "digest": "sha512", 00:21:10.215 "dhgroup": "ffdhe3072" 00:21:10.215 } 00:21:10.215 } 00:21:10.215 ]' 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.215 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.474 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.045 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.304 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.564 00:21:11.564 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.564 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.564 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.824 { 00:21:11.824 
"cntlid": 115, 00:21:11.824 "qid": 0, 00:21:11.824 "state": "enabled", 00:21:11.824 "listen_address": { 00:21:11.824 "trtype": "TCP", 00:21:11.824 "adrfam": "IPv4", 00:21:11.824 "traddr": "10.0.0.2", 00:21:11.824 "trsvcid": "4420" 00:21:11.824 }, 00:21:11.824 "peer_address": { 00:21:11.824 "trtype": "TCP", 00:21:11.824 "adrfam": "IPv4", 00:21:11.824 "traddr": "10.0.0.1", 00:21:11.824 "trsvcid": "43194" 00:21:11.824 }, 00:21:11.824 "auth": { 00:21:11.824 "state": "completed", 00:21:11.824 "digest": "sha512", 00:21:11.824 "dhgroup": "ffdhe3072" 00:21:11.824 } 00:21:11.824 } 00:21:11.824 ]' 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.824 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.824 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.824 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.824 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.084 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.028 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.028 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.288 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.288 { 00:21:13.288 "cntlid": 117, 00:21:13.288 "qid": 0, 00:21:13.288 "state": "enabled", 00:21:13.288 "listen_address": { 00:21:13.288 "trtype": "TCP", 00:21:13.288 "adrfam": "IPv4", 00:21:13.288 "traddr": "10.0.0.2", 00:21:13.288 "trsvcid": "4420" 00:21:13.288 }, 00:21:13.288 "peer_address": { 00:21:13.288 "trtype": "TCP", 00:21:13.288 "adrfam": "IPv4", 00:21:13.288 "traddr": "10.0.0.1", 00:21:13.288 "trsvcid": "43220" 00:21:13.288 }, 00:21:13.288 "auth": { 00:21:13.288 "state": "completed", 00:21:13.288 "digest": "sha512", 00:21:13.288 "dhgroup": "ffdhe3072" 00:21:13.288 } 00:21:13.288 } 00:21:13.288 ]' 00:21:13.288 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.548 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.809 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.381 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.642 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.903 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.903 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.903 { 00:21:14.903 "cntlid": 119, 00:21:14.903 "qid": 0, 00:21:14.903 "state": "enabled", 00:21:14.903 "listen_address": { 00:21:14.903 "trtype": "TCP", 00:21:14.903 "adrfam": "IPv4", 00:21:14.903 "traddr": "10.0.0.2", 00:21:14.903 "trsvcid": "4420" 00:21:14.903 }, 00:21:14.903 "peer_address": { 00:21:14.903 "trtype": "TCP", 00:21:14.903 "adrfam": "IPv4", 00:21:14.903 "traddr": "10.0.0.1", 00:21:14.903 "trsvcid": "46050" 00:21:14.903 }, 00:21:14.903 "auth": { 00:21:14.903 "state": "completed", 00:21:14.903 "digest": "sha512", 00:21:14.903 "dhgroup": "ffdhe3072" 00:21:14.903 } 00:21:14.903 } 00:21:14.903 ]' 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.163 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.423 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:15.994 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.995 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.256 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.517 00:21:16.517 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.517 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.517 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.778 00:46:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.778 { 00:21:16.778 "cntlid": 121, 00:21:16.778 "qid": 0, 00:21:16.778 "state": "enabled", 00:21:16.778 "listen_address": { 00:21:16.778 "trtype": "TCP", 00:21:16.778 "adrfam": "IPv4", 00:21:16.778 "traddr": "10.0.0.2", 00:21:16.778 "trsvcid": "4420" 00:21:16.778 }, 00:21:16.778 "peer_address": { 00:21:16.778 "trtype": "TCP", 00:21:16.778 "adrfam": "IPv4", 00:21:16.778 "traddr": "10.0.0.1", 00:21:16.778 "trsvcid": "46080" 00:21:16.778 }, 00:21:16.778 "auth": { 00:21:16.778 "state": "completed", 00:21:16.778 "digest": "sha512", 00:21:16.778 "dhgroup": "ffdhe4096" 00:21:16.778 } 00:21:16.778 } 00:21:16.778 ]' 00:21:16.778 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.779 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.039 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.982 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.982 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.243 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.243 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.505 { 00:21:18.505 "cntlid": 123, 00:21:18.505 "qid": 0, 00:21:18.505 "state": "enabled", 00:21:18.505 "listen_address": { 00:21:18.505 "trtype": "TCP", 00:21:18.505 "adrfam": "IPv4", 00:21:18.505 "traddr": "10.0.0.2", 00:21:18.505 "trsvcid": "4420" 00:21:18.505 }, 00:21:18.505 "peer_address": { 00:21:18.505 "trtype": "TCP", 00:21:18.505 "adrfam": "IPv4", 00:21:18.505 "traddr": "10.0.0.1", 00:21:18.505 "trsvcid": "46110" 00:21:18.505 }, 00:21:18.505 "auth": { 00:21:18.505 "state": "completed", 00:21:18.505 "digest": "sha512", 00:21:18.505 "dhgroup": "ffdhe4096" 00:21:18.505 } 00:21:18.505 } 00:21:18.505 ]' 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.505 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.765 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.336 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.597 
00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.597 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.858 00:21:19.858 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.858 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.858 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.118 { 00:21:20.118 "cntlid": 125, 00:21:20.118 "qid": 0, 00:21:20.118 "state": "enabled", 00:21:20.118 "listen_address": { 00:21:20.118 "trtype": "TCP", 00:21:20.118 "adrfam": "IPv4", 00:21:20.118 "traddr": "10.0.0.2", 00:21:20.118 "trsvcid": "4420" 00:21:20.118 }, 00:21:20.118 "peer_address": { 00:21:20.118 "trtype": "TCP", 00:21:20.118 "adrfam": "IPv4", 00:21:20.118 "traddr": "10.0.0.1", 00:21:20.118 "trsvcid": "46134" 00:21:20.118 }, 00:21:20.118 "auth": { 00:21:20.118 "state": "completed", 00:21:20.118 "digest": "sha512", 00:21:20.118 "dhgroup": "ffdhe4096" 00:21:20.118 } 00:21:20.118 } 00:21:20.118 ]' 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.118 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.119 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.119 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.119 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.119 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.380 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:20.951 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.212 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.473 00:21:21.473 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.473 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.473 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.793 { 00:21:21.793 "cntlid": 127, 00:21:21.793 "qid": 0, 00:21:21.793 "state": "enabled", 00:21:21.793 "listen_address": { 00:21:21.793 "trtype": "TCP", 00:21:21.793 "adrfam": "IPv4", 00:21:21.793 "traddr": "10.0.0.2", 00:21:21.793 "trsvcid": "4420" 00:21:21.793 }, 00:21:21.793 "peer_address": { 00:21:21.793 "trtype": "TCP", 00:21:21.793 "adrfam": "IPv4", 00:21:21.793 "traddr": "10.0.0.1", 00:21:21.793 "trsvcid": "46156" 00:21:21.793 }, 00:21:21.793 "auth": { 00:21:21.793 "state": "completed", 00:21:21.793 "digest": "sha512", 00:21:21.793 "dhgroup": "ffdhe4096" 00:21:21.793 } 00:21:21.793 } 00:21:21.793 ]' 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.793 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.054 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:22.625 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
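[Editor's sketch] The --dhchap-secret / --dhchap-ctrl-secret strings consumed by nvme connect throughout this trace follow the DHHC-1:NN:<base64>: framing from the NVMe in-band authentication spec, where NN encodes the hash used to transform the secret (00 = untransformed, 01/02/03 = SHA-256/384/512). A sketch of minting a fresh secret with nvme-cli follows; the gen-dhchap-key subcommand and its flag spellings are an assumption about the installed nvme-cli version, not something this log exercises:

# 48-byte secret with SHA-512 transform (hmac=3), bound to the host NQN used in this run
nvme gen-dhchap-key --hmac=3 --key-length=48 \
  --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# prints a DHHC-1:03:<base64>: string suitable for --dhchap-secret / nvmf_subsystem_add_host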
00:21:22.886 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.886 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.147 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.408 { 00:21:23.408 "cntlid": 129, 00:21:23.408 "qid": 0, 00:21:23.408 "state": "enabled", 00:21:23.408 "listen_address": { 00:21:23.408 "trtype": "TCP", 00:21:23.408 "adrfam": "IPv4", 00:21:23.408 "traddr": "10.0.0.2", 00:21:23.408 "trsvcid": "4420" 00:21:23.408 }, 00:21:23.408 "peer_address": { 00:21:23.408 "trtype": "TCP", 00:21:23.408 "adrfam": "IPv4", 00:21:23.408 "traddr": "10.0.0.1", 00:21:23.408 "trsvcid": "46194" 00:21:23.408 }, 00:21:23.408 "auth": { 
00:21:23.408 "state": "completed", 00:21:23.408 "digest": "sha512", 00:21:23.408 "dhgroup": "ffdhe6144" 00:21:23.408 } 00:21:23.408 } 00:21:23.408 ]' 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.408 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.670 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.612 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.183 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.183 { 00:21:25.183 "cntlid": 131, 00:21:25.183 "qid": 0, 00:21:25.183 "state": "enabled", 00:21:25.183 "listen_address": { 00:21:25.183 "trtype": "TCP", 00:21:25.183 "adrfam": "IPv4", 00:21:25.183 "traddr": "10.0.0.2", 00:21:25.183 "trsvcid": "4420" 00:21:25.183 }, 00:21:25.183 "peer_address": { 00:21:25.183 "trtype": "TCP", 00:21:25.183 "adrfam": "IPv4", 00:21:25.183 "traddr": "10.0.0.1", 00:21:25.183 "trsvcid": "54868" 00:21:25.183 }, 00:21:25.183 "auth": { 00:21:25.183 "state": "completed", 00:21:25.183 "digest": "sha512", 00:21:25.183 "dhgroup": "ffdhe6144" 00:21:25.183 } 00:21:25.183 } 00:21:25.183 ]' 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.183 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.444 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.386 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:26.958 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.958 { 00:21:26.958 "cntlid": 133, 00:21:26.958 "qid": 0, 00:21:26.958 "state": "enabled", 00:21:26.958 "listen_address": { 00:21:26.958 "trtype": "TCP", 00:21:26.958 "adrfam": "IPv4", 00:21:26.958 "traddr": "10.0.0.2", 00:21:26.958 "trsvcid": "4420" 00:21:26.958 }, 00:21:26.958 "peer_address": { 00:21:26.958 "trtype": "TCP", 00:21:26.958 "adrfam": "IPv4", 00:21:26.958 "traddr": "10.0.0.1", 00:21:26.958 "trsvcid": "54900" 00:21:26.958 }, 00:21:26.958 "auth": { 00:21:26.958 "state": "completed", 00:21:26.958 "digest": "sha512", 00:21:26.958 "dhgroup": "ffdhe6144" 00:21:26.958 } 00:21:26.958 } 00:21:26.958 ]' 00:21:26.958 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.219 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.479 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.050 00:46:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.050 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.051 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.312 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.573 00:21:28.573 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.573 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.573 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.834 { 00:21:28.834 "cntlid": 135, 00:21:28.834 "qid": 0, 00:21:28.834 "state": "enabled", 00:21:28.834 "listen_address": { 
00:21:28.834 "trtype": "TCP", 00:21:28.834 "adrfam": "IPv4", 00:21:28.834 "traddr": "10.0.0.2", 00:21:28.834 "trsvcid": "4420" 00:21:28.834 }, 00:21:28.834 "peer_address": { 00:21:28.834 "trtype": "TCP", 00:21:28.834 "adrfam": "IPv4", 00:21:28.834 "traddr": "10.0.0.1", 00:21:28.834 "trsvcid": "54914" 00:21:28.834 }, 00:21:28.834 "auth": { 00:21:28.834 "state": "completed", 00:21:28.834 "digest": "sha512", 00:21:28.834 "dhgroup": "ffdhe6144" 00:21:28.834 } 00:21:28.834 } 00:21:28.834 ]' 00:21:28.834 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.834 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.834 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.834 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.834 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.095 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.095 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.095 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.095 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.038 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.609 00:21:30.609 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.609 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.609 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.870 { 00:21:30.870 "cntlid": 137, 00:21:30.870 "qid": 0, 00:21:30.870 "state": "enabled", 00:21:30.870 "listen_address": { 00:21:30.870 "trtype": "TCP", 00:21:30.870 "adrfam": "IPv4", 00:21:30.870 "traddr": "10.0.0.2", 00:21:30.870 "trsvcid": "4420" 00:21:30.870 }, 00:21:30.870 "peer_address": { 00:21:30.870 "trtype": "TCP", 00:21:30.870 "adrfam": "IPv4", 00:21:30.870 "traddr": "10.0.0.1", 00:21:30.870 "trsvcid": "54938" 00:21:30.870 }, 00:21:30.870 "auth": { 00:21:30.870 "state": "completed", 00:21:30.870 "digest": "sha512", 00:21:30.870 "dhgroup": "ffdhe8192" 00:21:30.870 } 00:21:30.870 } 00:21:30.870 ]' 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.870 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.870 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.870 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.870 00:46:49 
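
# The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) trace above is bash's
# :+ alternate-value expansion inside an array assignment: the array stays
# empty when no controller key exists for that key id, so the bidirectional
# flags are injected only where a ckey is defined (key3 gets none in this run).
# A standalone illustration with hypothetical values, same mechanism:
ckeys=("c0" "c1" "c2" "")         # key3 deliberately has no controller key
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"                # 0 -> unidirectional auth for key3
keyid=0
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]}"                 # --dhchap-ctrlr-key ckey0
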
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.870 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.870 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.131 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:31.702 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.702 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.702 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.963 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.963 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.963 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.963 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.963 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.963 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.963 00:46:50 
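
# The nvme-cli reconnects above pass raw DH-HMAC-CHAP secrets rather than SPDK
# key names. The strings follow the standard NVMe-oF secret representation
# "DHHC-1:<hh>:<base64 key material>:", where <hh> is the transformation hash
# (00 = unhashed, 01/02/03 = SHA-256/384/512) — hence key0's DHHC-1:00: host
# secret paired with a DHHC-1:03: controller secret above. Shape of the call,
# secrets elided:
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid "$hostid" \
     --dhchap-secret "DHHC-1:00:<base64>:" \
     --dhchap-ctrl-secret "DHHC-1:03:<base64>:"
nvme disconnect -n "$subnqn"
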
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.534 00:21:32.534 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.534 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.535 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.795 { 00:21:32.795 "cntlid": 139, 00:21:32.795 "qid": 0, 00:21:32.795 "state": "enabled", 00:21:32.795 "listen_address": { 00:21:32.795 "trtype": "TCP", 00:21:32.795 "adrfam": "IPv4", 00:21:32.795 "traddr": "10.0.0.2", 00:21:32.795 "trsvcid": "4420" 00:21:32.795 }, 00:21:32.795 "peer_address": { 00:21:32.795 "trtype": "TCP", 00:21:32.795 "adrfam": "IPv4", 00:21:32.795 "traddr": "10.0.0.1", 00:21:32.795 "trsvcid": "54962" 00:21:32.795 }, 00:21:32.795 "auth": { 00:21:32.795 "state": "completed", 00:21:32.795 "digest": "sha512", 00:21:32.795 "dhgroup": "ffdhe8192" 00:21:32.795 } 00:21:32.795 } 00:21:32.795 ]' 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.795 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.795 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.795 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.795 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.056 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:Njc5ZjdkZTZlM2E3YzhlMDFlOWM2NzM2MDNiZDA5MDK8Pg0n: --dhchap-ctrl-secret DHHC-1:02:YWFlNmU1NTkzMWM0N2MxMDUzNTAzZDk2OTliODFjM2YwODU2MDcxZjUxYzYwYzM1SJ3QbA==: 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.998 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.998 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.570 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.570 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.570 { 00:21:34.570 "cntlid": 141, 00:21:34.570 "qid": 0, 00:21:34.571 "state": "enabled", 00:21:34.571 "listen_address": { 00:21:34.571 "trtype": "TCP", 00:21:34.571 "adrfam": "IPv4", 00:21:34.571 "traddr": "10.0.0.2", 00:21:34.571 "trsvcid": "4420" 00:21:34.571 }, 00:21:34.571 "peer_address": { 00:21:34.571 "trtype": "TCP", 00:21:34.571 "adrfam": "IPv4", 00:21:34.571 "traddr": "10.0.0.1", 00:21:34.571 "trsvcid": "54992" 00:21:34.571 }, 00:21:34.571 "auth": { 00:21:34.571 "state": "completed", 00:21:34.571 "digest": "sha512", 00:21:34.571 "dhgroup": "ffdhe8192" 00:21:34.571 } 00:21:34.571 } 00:21:34.571 ]' 00:21:34.571 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.831 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.091 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIxMjMyZjM4OWUyOGQ5NWFmZmM0YjY1NmI4MmJiMTUzYjJjOWIwZjAxYmY3NGJmGAsJIg==: --dhchap-ctrl-secret DHHC-1:01:Y2M0NTIwMjBiMGM4NjBmNGU5MTEwNmUxMGM1MTEzN2WY46AH: 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.661 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
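
# Each verification reads the target's qpair table: the auth object echoed
# above is the negotiated result on the admin queue (qid 0), which the jq
# probes assert field by field. The script captures the JSON once and probes
# it; a minimal equivalent:
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'    # sha512
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'   # ffdhe8192
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # completed
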
connect_authenticate sha512 ffdhe8192 3 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.922 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.492 00:21:36.492 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.493 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.493 { 00:21:36.493 "cntlid": 143, 00:21:36.493 "qid": 0, 00:21:36.493 "state": "enabled", 00:21:36.493 "listen_address": { 00:21:36.493 "trtype": "TCP", 00:21:36.493 "adrfam": "IPv4", 00:21:36.493 "traddr": "10.0.0.2", 00:21:36.493 "trsvcid": "4420" 00:21:36.493 }, 00:21:36.493 "peer_address": { 00:21:36.493 "trtype": "TCP", 00:21:36.493 "adrfam": "IPv4", 00:21:36.493 "traddr": "10.0.0.1", 00:21:36.493 "trsvcid": "36940" 00:21:36.493 }, 00:21:36.493 "auth": { 00:21:36.493 "state": "completed", 00:21:36.493 "digest": "sha512", 00:21:36.493 "dhgroup": "ffdhe8192" 00:21:36.493 } 00:21:36.493 } 00:21:36.493 ]' 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.784 00:46:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.784 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.072 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.644 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
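
# The IFS=, / printf pair at auth.sh@102-103 above joins the test's digest and
# dhgroup arrays into the comma-separated values that bdev_nvme_set_options
# expects. The xtrace only shows the already-expanded printf; a likely
# reconstruction of the mechanism:
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
join() { local IFS=, ; printf %s "$*" ; }
join "${digests[@]}"    # -> sha256,sha384,sha512
join "${dhgroups[@]}"   # -> null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
# hostrpc bdev_nvme_set_options --dhchap-digests "$(join "${digests[@]}")" \
#     --dhchap-dhgroups "$(join "${dhgroups[@]}")"
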
00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.904 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.904 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.904 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.904 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.474 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.474 { 00:21:38.474 "cntlid": 145, 00:21:38.474 "qid": 0, 00:21:38.474 "state": "enabled", 00:21:38.474 "listen_address": { 00:21:38.474 "trtype": "TCP", 00:21:38.474 "adrfam": "IPv4", 00:21:38.474 "traddr": "10.0.0.2", 00:21:38.474 "trsvcid": "4420" 00:21:38.474 }, 00:21:38.474 "peer_address": { 00:21:38.474 "trtype": "TCP", 00:21:38.474 "adrfam": "IPv4", 00:21:38.474 "traddr": "10.0.0.1", 00:21:38.474 "trsvcid": "36958" 00:21:38.474 }, 00:21:38.474 "auth": { 00:21:38.474 "state": "completed", 00:21:38.474 "digest": "sha512", 00:21:38.474 "dhgroup": "ffdhe8192" 00:21:38.474 } 00:21:38.474 } 00:21:38.474 ]' 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.474 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.735 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.735 
00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MTMwZDJjODJkODQzZTM3YzQyYzE2MDdmMTA4Y2U1YTdmNjYwNGYxOGNiNDA4Y2YwRtf9YQ==: --dhchap-ctrl-secret DHHC-1:03:M2NhM2JhNWQxMTQ2YjdmYWM0MTE2MjJmNWY4ODlhOWQ3NDAyNjg1NDY1ODM3NjBkOTU3NWM0Yzc0YzIxYWZhY5eOCcw=: 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.677 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.249 request: 00:21:40.249 { 00:21:40.249 "name": "nvme0", 00:21:40.249 "trtype": "tcp", 00:21:40.249 "traddr": 
"10.0.0.2", 00:21:40.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.249 "adrfam": "ipv4", 00:21:40.249 "trsvcid": "4420", 00:21:40.249 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.249 "dhchap_key": "key2", 00:21:40.249 "method": "bdev_nvme_attach_controller", 00:21:40.249 "req_id": 1 00:21:40.249 } 00:21:40.249 Got JSON-RPC error response 00:21:40.249 response: 00:21:40.249 { 00:21:40.249 "code": -5, 00:21:40.249 "message": "Input/output error" 00:21:40.249 } 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.249 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.509 request: 00:21:40.509 { 00:21:40.509 "name": "nvme0", 00:21:40.509 "trtype": "tcp", 00:21:40.509 "traddr": "10.0.0.2", 00:21:40.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.510 "adrfam": "ipv4", 00:21:40.510 "trsvcid": "4420", 00:21:40.510 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.510 "dhchap_key": "key1", 00:21:40.510 "dhchap_ctrlr_key": "ckey2", 00:21:40.510 "method": "bdev_nvme_attach_controller", 00:21:40.510 "req_id": 1 00:21:40.510 } 00:21:40.510 Got JSON-RPC error response 00:21:40.510 response: 00:21:40.510 { 00:21:40.510 "code": -5, 00:21:40.510 "message": "Input/output error" 00:21:40.510 } 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.510 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:40.770 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:40.771 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:40.771 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:40.771 00:46:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.771 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.032 request: 00:21:41.032 { 00:21:41.032 "name": "nvme0", 00:21:41.032 "trtype": "tcp", 00:21:41.032 "traddr": "10.0.0.2", 00:21:41.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.032 "adrfam": "ipv4", 00:21:41.032 "trsvcid": "4420", 00:21:41.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.032 "dhchap_key": "key1", 00:21:41.032 "dhchap_ctrlr_key": "ckey1", 00:21:41.032 "method": "bdev_nvme_attach_controller", 00:21:41.032 "req_id": 1 00:21:41.032 } 00:21:41.032 Got JSON-RPC error response 00:21:41.032 response: 00:21:41.032 { 00:21:41.032 "code": -5, 00:21:41.032 "message": "Input/output error" 00:21:41.032 } 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 421376 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 421376 ']' 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 421376 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:41.032 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 421376 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 421376' 00:21:41.293 killing process with pid 421376 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 421376 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 421376 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:41.293 00:46:59 
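
# The three NOT-wrapped attach attempts above are deliberate failures: a host
# key the subsystem never registered (key2 vs key1), a mismatched controller
# key (ckey2 vs ckey1), and a controller key offered when the subsystem holds
# none (key1+ckey1 vs bare key1). All three surface as JSON-RPC error -5
# (Input/output error) from bdev_nvme_attach_controller; the NOT helper
# inverts the exit status so the test proceeds only when authentication really
# failed, roughly:
if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
       -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi
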
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=448007 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 448007 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 448007 ']' 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:41.293 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 448007 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 448007 ']' 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
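
# From here the target runs as a fresh process (pid 448007) with the
# authentication debug log flag enabled; the nvmf/common.sh@480 command traced
# above amounts to:
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
waitforlisten "$nvmfpid"   # test helper: blocks until /var/tmp/spdk.sock answers
# -L nvmf_auth turns on auth-path debug logging, -e 0xFFFF enables all
# tracepoint groups, and --wait-for-rpc holds initialization until the
# rpc_cmd batch at auth.sh@143 completes it.
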
00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.237 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.498 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.070 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.070 { 00:21:43.070 
"cntlid": 1, 00:21:43.070 "qid": 0, 00:21:43.070 "state": "enabled", 00:21:43.070 "listen_address": { 00:21:43.070 "trtype": "TCP", 00:21:43.070 "adrfam": "IPv4", 00:21:43.070 "traddr": "10.0.0.2", 00:21:43.070 "trsvcid": "4420" 00:21:43.070 }, 00:21:43.070 "peer_address": { 00:21:43.070 "trtype": "TCP", 00:21:43.070 "adrfam": "IPv4", 00:21:43.070 "traddr": "10.0.0.1", 00:21:43.070 "trsvcid": "37002" 00:21:43.070 }, 00:21:43.070 "auth": { 00:21:43.070 "state": "completed", 00:21:43.070 "digest": "sha512", 00:21:43.070 "dhgroup": "ffdhe8192" 00:21:43.070 } 00:21:43.070 } 00:21:43.070 ]' 00:21:43.070 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.330 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:OWFjNGVhNTZjMDMyZTEzMjA3NTE1Y2QzNzM3ZmZkOGMzNWY5NzY3MjNlMzA3N2RjMTFkYTUwY2IwZDA5ODljMnH//f8=: 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:44.272 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.273 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.533 request: 00:21:44.533 { 00:21:44.533 "name": "nvme0", 00:21:44.533 "trtype": "tcp", 00:21:44.533 "traddr": "10.0.0.2", 00:21:44.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.533 "adrfam": "ipv4", 00:21:44.533 "trsvcid": "4420", 00:21:44.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.533 "dhchap_key": "key3", 00:21:44.533 "method": "bdev_nvme_attach_controller", 00:21:44.533 "req_id": 1 00:21:44.533 } 00:21:44.533 Got JSON-RPC error response 00:21:44.533 response: 00:21:44.533 { 00:21:44.533 "code": -5, 00:21:44.533 "message": "Input/output error" 00:21:44.533 } 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.533 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.794 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.794 request: 00:21:44.794 { 00:21:44.794 "name": "nvme0", 00:21:44.794 "trtype": "tcp", 00:21:44.794 "traddr": "10.0.0.2", 00:21:44.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.794 "adrfam": "ipv4", 00:21:44.794 "trsvcid": "4420", 00:21:44.794 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.794 "dhchap_key": "key3", 00:21:44.794 "method": "bdev_nvme_attach_controller", 00:21:44.794 "req_id": 1 00:21:44.794 } 00:21:44.794 Got JSON-RPC error response 00:21:44.794 response: 00:21:44.794 { 00:21:44.794 "code": -5, 00:21:44.794 "message": "Input/output error" 00:21:44.794 } 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:44.794 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.054 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:45.315 request: 00:21:45.315 { 00:21:45.315 "name": "nvme0", 00:21:45.315 "trtype": "tcp", 00:21:45.315 "traddr": "10.0.0.2", 00:21:45.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.315 "adrfam": "ipv4", 00:21:45.315 "trsvcid": "4420", 00:21:45.315 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.315 "dhchap_key": "key0", 00:21:45.315 "dhchap_ctrlr_key": "key1", 00:21:45.315 "method": "bdev_nvme_attach_controller", 00:21:45.315 "req_id": 1 00:21:45.315 } 00:21:45.315 Got JSON-RPC error response 00:21:45.315 response: 00:21:45.315 { 00:21:45.315 "code": -5, 00:21:45.315 "message": "Input/output error" 00:21:45.315 } 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:45.316 00:47:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:45.316 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.316 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:45.576 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.576 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.576 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 421431 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 421431 ']' 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 421431 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 421431 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 421431' 00:21:45.838 killing process with pid 421431 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 421431 00:21:45.838 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 421431 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:46.100 
00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.100 rmmod nvme_tcp 00:21:46.100 rmmod nvme_fabrics 00:21:46.100 rmmod nvme_keyring 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 448007 ']' 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 448007 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 448007 ']' 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 448007 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 448007 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 448007' 00:21:46.100 killing process with pid 448007 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 448007 00:21:46.100 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 448007 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.362 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.277 00:47:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.277 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SLB /tmp/spdk.key-sha256.Lyp /tmp/spdk.key-sha384.ClC /tmp/spdk.key-sha512.Hvt /tmp/spdk.key-sha512.XK4 /tmp/spdk.key-sha384.ZZ2 /tmp/spdk.key-sha256.uI0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:48.277 00:21:48.277 real 2m23.136s 00:21:48.277 user 5m18.782s 00:21:48.277 sys 0m20.936s 00:21:48.277 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:48.277 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.277 ************************************ 00:21:48.277 END TEST nvmf_auth_target 
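
A note on the pattern exercised in the auth-target cases above: each negative case narrows the host-side DH-HMAC-CHAP digests or DH groups with bdev_nvme_set_options, asserts that the subsequent bdev_nvme_attach_controller fails with JSON-RPC error -5 (Input/output error), and then restores the full sets before the next case. The NOT helper seen in the xtrace is the harness's negation wrapper: it succeeds only when the wrapped command fails. A condensed sketch of one such case, using the rpc.py flags and socket path exactly as they appear in this run ($hostnqn stands in for the uuid-based host NQN above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Restrict the host to sha256 only; the sha512-keyed attach should now fail.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
# Assert the failure: NOT returns success only if the command exits non-zero.
NOT $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key3
# Restore the full digest and DH-group sets for the next case.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
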
00:21:48.277 ************************************ 00:21:48.277 00:47:06 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:48.277 00:47:06 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:48.277 00:47:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:48.277 00:47:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:48.277 00:47:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.538 ************************************ 00:21:48.538 START TEST nvmf_bdevio_no_huge 00:21:48.539 ************************************ 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:48.539 * Looking for test storage... 00:21:48.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.539 
00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.539 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.127 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.128 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.128 00:47:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.128 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.128 
00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:21:55.393 00:21:55.393 --- 10.0.0.2 ping statistics --- 00:21:55.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.393 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:21:55.393 00:21:55.393 --- 10.0.0.1 ping statistics --- 00:21:55.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.393 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=453058 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 453058 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 453058 ']' 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:55.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:55.393 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:55.393 [2024-06-08 00:47:13.649364] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:21:55.393 [2024-06-08 00:47:13.649436] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:55.694 [2024-06-08 00:47:13.743351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.694 [2024-06-08 00:47:13.851440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.694 [2024-06-08 00:47:13.851493] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.694 [2024-06-08 00:47:13.851501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.694 [2024-06-08 00:47:13.851508] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.694 [2024-06-08 00:47:13.851514] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.694 [2024-06-08 00:47:13.851697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:21:55.694 [2024-06-08 00:47:13.851977] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:21:55.694 [2024-06-08 00:47:13.852140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:21:55.694 [2024-06-08 00:47:13.852143] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 [2024-06-08 00:47:14.484655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.267 00:47:14 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 Malloc0 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.267 [2024-06-08 00:47:14.538321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.267 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:56.268 { 00:21:56.268 "params": { 00:21:56.268 "name": "Nvme$subsystem", 00:21:56.268 "trtype": "$TEST_TRANSPORT", 00:21:56.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.268 "adrfam": "ipv4", 00:21:56.268 "trsvcid": "$NVMF_PORT", 00:21:56.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.268 "hdgst": ${hdgst:-false}, 00:21:56.268 "ddgst": ${ddgst:-false} 00:21:56.268 }, 00:21:56.268 "method": "bdev_nvme_attach_controller" 00:21:56.268 } 00:21:56.268 EOF 00:21:56.268 )") 00:21:56.268 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:56.529 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
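
The gen_nvmf_target_json call above builds the initiator-side configuration for bdevio: it expands a heredoc into one bdev_nvme_attach_controller params object per subsystem and pipes the result through jq, which pretty-prints it and fails fast on malformed JSON (the rendered document follows below). A minimal standalone sketch of the same idea, with the environment variables and their defaults assumed for illustration:

gen_target_json_sketch() {
    # Assumed for illustration: TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and
    # NVMF_PORT come from the test environment; the defaults match this run.
    cat <<EOF | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
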
00:21:56.529 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:56.529 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:56.529 "params": { 00:21:56.529 "name": "Nvme1", 00:21:56.529 "trtype": "tcp", 00:21:56.529 "traddr": "10.0.0.2", 00:21:56.529 "adrfam": "ipv4", 00:21:56.529 "trsvcid": "4420", 00:21:56.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.529 "hdgst": false, 00:21:56.529 "ddgst": false 00:21:56.529 }, 00:21:56.529 "method": "bdev_nvme_attach_controller" 00:21:56.529 }' 00:21:56.529 [2024-06-08 00:47:14.593813] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:21:56.529 [2024-06-08 00:47:14.593884] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid453104 ] 00:21:56.529 [2024-06-08 00:47:14.664694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:56.529 [2024-06-08 00:47:14.762682] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.529 [2024-06-08 00:47:14.762875] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.529 [2024-06-08 00:47:14.762879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.101 I/O targets: 00:21:57.101 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:57.101 00:21:57.101 00:21:57.101 CUnit - A unit testing framework for C - Version 2.1-3 00:21:57.101 http://cunit.sourceforge.net/ 00:21:57.101 00:21:57.101 00:21:57.101 Suite: bdevio tests on: Nvme1n1 00:21:57.101 Test: blockdev write read block ...passed 00:21:57.101 Test: blockdev write zeroes read block ...passed 00:21:57.101 Test: blockdev write zeroes read no split ...passed 00:21:57.101 Test: blockdev write zeroes read split ...passed 00:21:57.101 Test: blockdev write zeroes read split partial ...passed 00:21:57.101 Test: blockdev reset ...[2024-06-08 00:47:15.300768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.101 [2024-06-08 00:47:15.300822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b3650 (9): Bad file descriptor 00:21:57.362 [2024-06-08 00:47:15.408322] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
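
The /dev/fd/62 in the bdevio command line above is the file descriptor bash allocates for a process substitution, so the generated JSON reaches bdevio without a temporary file. Reconstructing the shape of the call from this run (gen_target_json_sketch is the illustration above, not the real helper):

# Flags as they appear in this run: --no-huge with a 1024 MB memory size (-s),
# matching the "-m 1024 --no-huge" DPDK EAL parameters logged at startup.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_target_json_sketch) --no-huge -s 1024
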
00:21:57.362 passed 00:21:57.362 Test: blockdev write read 8 blocks ...passed 00:21:57.362 Test: blockdev write read size > 128k ...passed 00:21:57.362 Test: blockdev write read invalid size ...passed 00:21:57.362 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:57.362 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:57.362 Test: blockdev write read max offset ...passed 00:21:57.362 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:57.362 Test: blockdev writev readv 8 blocks ...passed 00:21:57.362 Test: blockdev writev readv 30 x 1block ...passed 00:21:57.362 Test: blockdev writev readv block ...passed 00:21:57.362 Test: blockdev writev readv size > 128k ...passed 00:21:57.362 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:57.362 Test: blockdev comparev and writev ...[2024-06-08 00:47:15.596421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.596445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.596456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.596462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.597005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.597013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.597022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.597027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.597566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.597573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.597583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.597591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.598147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.598155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:57.362 [2024-06-08 00:47:15.598165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:57.362 [2024-06-08 00:47:15.598170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:57.362 passed 00:21:57.623 Test: blockdev nvme passthru rw ...passed 00:21:57.623 Test: blockdev nvme passthru vendor specific ...[2024-06-08 00:47:15.683412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:57.623 [2024-06-08 00:47:15.683422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:57.623 [2024-06-08 00:47:15.683833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:57.623 [2024-06-08 00:47:15.683840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:57.623 [2024-06-08 00:47:15.684273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:57.623 [2024-06-08 00:47:15.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:57.623 [2024-06-08 00:47:15.684690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:57.623 [2024-06-08 00:47:15.684698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:57.623 passed 00:21:57.623 Test: blockdev nvme admin passthru ...passed 00:21:57.623 Test: blockdev copy ...passed 00:21:57.623 00:21:57.623 Run Summary: Type Total Ran Passed Failed Inactive 00:21:57.623 suites 1 1 n/a 0 0 00:21:57.623 tests 23 23 23 0 0 00:21:57.623 asserts 152 152 152 0 n/a 00:21:57.623 00:21:57.623 Elapsed time = 1.278 seconds 00:21:57.884 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.885 rmmod nvme_tcp 00:21:57.885 rmmod nvme_fabrics 00:21:57.885 rmmod nvme_keyring 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 453058 ']' 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 453058 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 453058 ']' 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 453058 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 453058 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 453058' 00:21:57.885 killing process with pid 453058 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 453058 00:21:57.885 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 453058 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.458 00:47:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.370 00:47:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.370 00:22:00.370 real 0m12.030s 00:22:00.370 user 0m14.777s 00:22:00.370 sys 0m6.223s 00:22:00.370 00:47:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:00.370 00:47:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:00.370 ************************************ 00:22:00.370 END TEST nvmf_bdevio_no_huge 00:22:00.370 ************************************ 00:22:00.370 00:47:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:00.370 00:47:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:00.370 00:47:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:00.370 00:47:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.631 ************************************ 00:22:00.631 START TEST nvmf_tls 00:22:00.631 ************************************ 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:00.631 * Looking for test storage... 
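
Each suite above executes under the harness's run_test wrapper, which prints the asterisk START/END banners, times the script (the real/user/sys totals above), and propagates its exit status. A rough sketch of the wrapper's shape, as an approximation rather than the exact autotest_common.sh code:

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test_sketch nvmf_tls .../test/nvmf/target/tls.sh --transport=tcp
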
00:22:00.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.631 00:47:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.775 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.776 
00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:08.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:08.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:08.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:08.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:22:08.776 00:22:08.776 --- 10.0.0.2 ping statistics --- 00:22:08.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.776 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:22:08.776 00:22:08.776 --- 10.0.0.1 ping statistics --- 00:22:08.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.776 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=457733 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 457733 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 457733 ']' 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.776 00:47:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:08.776 00:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.776 [2024-06-08 00:47:26.046399] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:08.776 [2024-06-08 00:47:26.046458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.777 [2024-06-08 00:47:26.130902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.777 [2024-06-08 00:47:26.210225] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.777 [2024-06-08 00:47:26.210285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:08.777 [2024-06-08 00:47:26.210293] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.777 [2024-06-08 00:47:26.210300] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.777 [2024-06-08 00:47:26.210312] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.777 [2024-06-08 00:47:26.210339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:08.777 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:08.777 true 00:22:08.777 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.777 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:09.038 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:09.038 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:09.039 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:09.300 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.300 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:09.300 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:09.300 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:09.300 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:09.562 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.562 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:09.823 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:09.823 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:09.823 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.823 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:09.823 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:09.823 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:09.823 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:10.084 00:47:28 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.084 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:10.345 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:10.345 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:10.345 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:10.345 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.345 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.0EoXo58myd 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2KgPmZeUE3 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.0EoXo58myd 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2KgPmZeUE3 00:22:10.607 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:10.868 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:11.129 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.0EoXo58myd 00:22:11.129 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0EoXo58myd 00:22:11.129 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.389 [2024-06-08 00:47:29.433099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.389 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:11.389 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:11.649 [2024-06-08 00:47:29.725803] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.649 [2024-06-08 00:47:29.725974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.649 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:11.649 malloc0 00:22:11.649 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:11.909 00:47:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0EoXo58myd 00:22:11.909 [2024-06-08 00:47:30.188995] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:12.169 00:47:30 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0EoXo58myd 00:22:12.169 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.174 Initializing NVMe Controllers 00:22:22.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:22.174 Initialization complete. Launching workers. 
00:22:22.175 ======================================================== 00:22:22.175 Latency(us) 00:22:22.175 Device Information : IOPS MiB/s Average min max 00:22:22.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19176.68 74.91 3337.37 1159.22 4195.21 00:22:22.175 ======================================================== 00:22:22.175 Total : 19176.68 74.91 3337.37 1159.22 4195.21 00:22:22.175 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EoXo58myd 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0EoXo58myd' 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=460473 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 460473 /var/tmp/bdevperf.sock 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 460473 ']' 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:22.175 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.175 [2024-06-08 00:47:40.365383] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:22.175 [2024-06-08 00:47:40.365440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460473 ] 00:22:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.175 [2024-06-08 00:47:40.414179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.439 [2024-06-08 00:47:40.466789] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.072 00:47:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:23.072 00:47:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:23.072 00:47:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0EoXo58myd 00:22:23.072 [2024-06-08 00:47:41.279465] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.072 [2024-06-08 00:47:41.279519] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:23.333 TLSTESTn1 00:22:23.333 00:47:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:23.333 Running I/O for 10 seconds... 00:22:33.332 00:22:33.332 Latency(us) 00:22:33.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:33.332 Verification LBA range: start 0x0 length 0x2000 00:22:33.332 TLSTESTn1 : 10.06 2793.62 10.91 0.00 0.00 45673.00 5816.32 61166.93 00:22:33.332 =================================================================================================================== 00:22:33.332 Total : 2793.62 10.91 0.00 0.00 45673.00 5816.32 61166.93 00:22:33.332 0 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 460473 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 460473 ']' 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 460473 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:33.332 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 460473 00:22:33.593 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 460473' 00:22:33.594 killing process with pid 460473 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 460473 00:22:33.594 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.594 00:22:33.594 Latency(us) 00:22:33.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.594 
=================================================================================================================== 00:22:33.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.594 [2024-06-08 00:47:51.630039] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 460473 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2KgPmZeUE3 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2KgPmZeUE3 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2KgPmZeUE3 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2KgPmZeUE3' 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=462627 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 462627 /var/tmp/bdevperf.sock 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 462627 ']' 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:33.594 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.594 [2024-06-08 00:47:51.793905] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:33.594 [2024-06-08 00:47:51.793958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462627 ] 00:22:33.594 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.594 [2024-06-08 00:47:51.842681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.855 [2024-06-08 00:47:51.894803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.427 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:34.427 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:34.427 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2KgPmZeUE3 00:22:34.688 [2024-06-08 00:47:52.711870] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.688 [2024-06-08 00:47:52.711925] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:34.688 [2024-06-08 00:47:52.723604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:34.688 [2024-06-08 00:47:52.724094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0960 (107): Transport endpoint is not connected 00:22:34.688 [2024-06-08 00:47:52.725091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0960 (9): Bad file descriptor 00:22:34.688 [2024-06-08 00:47:52.726092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.688 [2024-06-08 00:47:52.726098] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:34.688 [2024-06-08 00:47:52.726105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:34.688 request: 00:22:34.688 { 00:22:34.688 "name": "TLSTEST", 00:22:34.688 "trtype": "tcp", 00:22:34.688 "traddr": "10.0.0.2", 00:22:34.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.688 "adrfam": "ipv4", 00:22:34.688 "trsvcid": "4420", 00:22:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.688 "psk": "/tmp/tmp.2KgPmZeUE3", 00:22:34.688 "method": "bdev_nvme_attach_controller", 00:22:34.688 "req_id": 1 00:22:34.688 } 00:22:34.688 Got JSON-RPC error response 00:22:34.688 response: 00:22:34.688 { 00:22:34.688 "code": -5, 00:22:34.688 "message": "Input/output error" 00:22:34.688 } 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 462627 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 462627 ']' 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 462627 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 462627 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 462627' 00:22:34.688 killing process with pid 462627 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 462627 00:22:34.688 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.688 00:22:34.688 Latency(us) 00:22:34.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.688 =================================================================================================================== 00:22:34.688 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.688 [2024-06-08 00:47:52.811398] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 462627 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:34.688 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0EoXo58myd 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0EoXo58myd 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0EoXo58myd 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0EoXo58myd' 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=462837 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 462837 /var/tmp/bdevperf.sock 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 462837 ']' 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:34.689 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.689 [2024-06-08 00:47:52.966955] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:34.689 [2024-06-08 00:47:52.967006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462837 ] 00:22:34.949 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.949 [2024-06-08 00:47:53.017323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.949 [2024-06-08 00:47:53.067774] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.520 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:35.520 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:35.520 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.0EoXo58myd 00:22:35.781 [2024-06-08 00:47:53.876651] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.781 [2024-06-08 00:47:53.876714] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:35.781 [2024-06-08 00:47:53.886347] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:35.781 [2024-06-08 00:47:53.886366] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:35.781 [2024-06-08 00:47:53.886387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:35.781 [2024-06-08 00:47:53.886899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6960 (107): Transport endpoint is not connected 00:22:35.781 [2024-06-08 00:47:53.887894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6960 (9): Bad file descriptor 00:22:35.781 [2024-06-08 00:47:53.888896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:35.781 [2024-06-08 00:47:53.888903] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:35.781 [2024-06-08 00:47:53.888910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:35.781 request: 00:22:35.781 { 00:22:35.781 "name": "TLSTEST", 00:22:35.781 "trtype": "tcp", 00:22:35.781 "traddr": "10.0.0.2", 00:22:35.781 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.781 "adrfam": "ipv4", 00:22:35.781 "trsvcid": "4420", 00:22:35.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.781 "psk": "/tmp/tmp.0EoXo58myd", 00:22:35.781 "method": "bdev_nvme_attach_controller", 00:22:35.781 "req_id": 1 00:22:35.781 } 00:22:35.781 Got JSON-RPC error response 00:22:35.781 response: 00:22:35.781 { 00:22:35.781 "code": -5, 00:22:35.781 "message": "Input/output error" 00:22:35.781 } 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 462837 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 462837 ']' 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 462837 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 462837 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 462837' 00:22:35.781 killing process with pid 462837 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 462837 00:22:35.781 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.781 00:22:35.781 Latency(us) 00:22:35.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.781 =================================================================================================================== 00:22:35.781 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:35.781 [2024-06-08 00:47:53.974975] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:35.781 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 462837 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EoXo58myd 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EoXo58myd 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EoXo58myd 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0EoXo58myd' 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=463177 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 463177 /var/tmp/bdevperf.sock 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 463177 ']' 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:36.043 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.043 [2024-06-08 00:47:54.130232] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:36.043 [2024-06-08 00:47:54.130288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463177 ] 00:22:36.043 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.043 [2024-06-08 00:47:54.180026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.043 [2024-06-08 00:47:54.231476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.986 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:36.986 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:36.986 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0EoXo58myd 00:22:36.986 [2024-06-08 00:47:55.048232] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.986 [2024-06-08 00:47:55.048297] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.986 [2024-06-08 00:47:55.059208] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:36.986 [2024-06-08 00:47:55.059226] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:36.986 [2024-06-08 00:47:55.059245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:36.987 [2024-06-08 00:47:55.060358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd5960 (107): Transport endpoint is not connected 00:22:36.987 [2024-06-08 00:47:55.061353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd5960 (9): Bad file descriptor 00:22:36.987 [2024-06-08 00:47:55.062354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:36.987 [2024-06-08 00:47:55.062360] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:36.987 [2024-06-08 00:47:55.062367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:36.987 request: 00:22:36.987 { 00:22:36.987 "name": "TLSTEST", 00:22:36.987 "trtype": "tcp", 00:22:36.987 "traddr": "10.0.0.2", 00:22:36.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.987 "adrfam": "ipv4", 00:22:36.987 "trsvcid": "4420", 00:22:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.987 "psk": "/tmp/tmp.0EoXo58myd", 00:22:36.987 "method": "bdev_nvme_attach_controller", 00:22:36.987 "req_id": 1 00:22:36.987 } 00:22:36.987 Got JSON-RPC error response 00:22:36.987 response: 00:22:36.987 { 00:22:36.987 "code": -5, 00:22:36.987 "message": "Input/output error" 00:22:36.987 } 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 463177 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 463177 ']' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 463177 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 463177 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 463177' 00:22:36.987 killing process with pid 463177 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 463177 00:22:36.987 Received shutdown signal, test time was about 10.000000 seconds 00:22:36.987 00:22:36.987 Latency(us) 00:22:36.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.987 =================================================================================================================== 00:22:36.987 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:36.987 [2024-06-08 00:47:55.146735] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 463177 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:36.987 
00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=463346 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 463346 /var/tmp/bdevperf.sock 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 463346 ']' 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:36.987 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.248 [2024-06-08 00:47:55.300121] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:37.248 [2024-06-08 00:47:55.300174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463346 ] 00:22:37.248 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.248 [2024-06-08 00:47:55.350722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.248 [2024-06-08 00:47:55.401606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.819 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:37.819 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:37.819 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:38.081 [2024-06-08 00:47:56.222071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:38.081 [2024-06-08 00:47:56.223205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde6330 (9): Bad file descriptor 00:22:38.081 [2024-06-08 00:47:56.224204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.081 [2024-06-08 00:47:56.224211] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:38.081 [2024-06-08 00:47:56.224217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:38.081 request: 00:22:38.081 { 00:22:38.081 "name": "TLSTEST", 00:22:38.081 "trtype": "tcp", 00:22:38.081 "traddr": "10.0.0.2", 00:22:38.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.081 "adrfam": "ipv4", 00:22:38.081 "trsvcid": "4420", 00:22:38.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.081 "method": "bdev_nvme_attach_controller", 00:22:38.081 "req_id": 1 00:22:38.081 } 00:22:38.081 Got JSON-RPC error response 00:22:38.081 response: 00:22:38.081 { 00:22:38.081 "code": -5, 00:22:38.081 "message": "Input/output error" 00:22:38.081 } 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 463346 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 463346 ']' 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 463346 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 463346 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 463346' 00:22:38.081 killing process with pid 463346 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 463346 00:22:38.081 Received shutdown signal, test time was about 10.000000 seconds 00:22:38.081 00:22:38.081 Latency(us) 00:22:38.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.081 =================================================================================================================== 00:22:38.081 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:38.081 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 463346 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 457733 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 457733 ']' 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 457733 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 457733 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 457733' 00:22:38.342 killing process with pid 457733 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 457733 00:22:38.342 
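The format_interchange_psk step that follows derives the NVMe/TCP interchange-format key (the NVMeTLSkey-1:02:...: string) from the raw hex key via an inline python heredoc. A minimal sketch of what that helper appears to compute; the ASCII treatment of the key and the little-endian CRC32 placement are inferred from the printed key_long value, not taken from the helper's source:

    import base64
    import zlib

    def format_interchange_psk(key_ascii, hash_id, prefix="NVMeTLSkey-1"):
        # The configured key is used as ASCII text (not decoded from hex);
        # a little-endian CRC32 of that text is appended before base64-encoding.
        # Inferred from the key printed below; an assumption, not the helper's source.
        raw = key_ascii.encode("ascii")
        crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(raw + crc).decode("ascii")
        return "{}:{:02x}:{}:".format(prefix, hash_id, b64)

    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The "02" field carries the hash argument (2) as two hex digits, and the trailing colon matches the key_long value recorded below.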
[2024-06-08 00:47:56.467124] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 457733 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:38.342 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JxAeA3gswv 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JxAeA3gswv 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=463556 00:22:38.603 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 463556 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 463556 ']' 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.604 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.604 [2024-06-08 00:47:56.698355] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:38.604 [2024-06-08 00:47:56.698426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.604 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.604 [2024-06-08 00:47:56.782787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.604 [2024-06-08 00:47:56.844517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.604 [2024-06-08 00:47:56.844551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.604 [2024-06-08 00:47:56.844557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.604 [2024-06-08 00:47:56.844562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.604 [2024-06-08 00:47:56.844566] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.604 [2024-06-08 00:47:56.844588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JxAeA3gswv 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.547 [2024-06-08 00:47:57.644596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:39.547 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:39.807 [2024-06-08 00:47:57.941309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.807 [2024-06-08 00:47:57.941483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.807 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:39.807 malloc0 00:22:40.069 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.069 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 
00:22:40.329 [2024-06-08 00:47:58.364114] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:40.329 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JxAeA3gswv 00:22:40.329 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JxAeA3gswv' 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=463913 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 463913 /var/tmp/bdevperf.sock 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 463913 ']' 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.330 [2024-06-08 00:47:58.408961] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:40.330 [2024-06-08 00:47:58.409010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463913 ] 00:22:40.330 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.330 [2024-06-08 00:47:58.459354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.330 [2024-06-08 00:47:58.511879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:40.330 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:22:40.590 [2024-06-08 00:47:58.731443] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.590 [2024-06-08 00:47:58.731499] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:40.590 TLSTESTn1 00:22:40.590 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:40.851 Running I/O for 10 seconds... 00:22:50.884 00:22:50.884 Latency(us) 00:22:50.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.884 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.884 Verification LBA range: start 0x0 length 0x2000 00:22:50.884 TLSTESTn1 : 10.03 2784.59 10.88 0.00 0.00 45894.81 6280.53 123207.68 00:22:50.884 =================================================================================================================== 00:22:50.884 Total : 2784.59 10.88 0.00 0.00 45894.81 6280.53 123207.68 00:22:50.884 0 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 463913 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 463913 ']' 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 463913 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:50.884 00:48:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 463913 00:22:50.884 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:50.884 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:50.884 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 463913' 00:22:50.884 killing process with pid 463913 00:22:50.884 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 463913 00:22:50.884 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.884 00:22:50.884 Latency(us) 00:22:50.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.884 
=================================================================================================================== 00:22:50.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.884 [2024-06-08 00:48:09.039645] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.884 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 463913 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JxAeA3gswv 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JxAeA3gswv 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JxAeA3gswv 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JxAeA3gswv 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JxAeA3gswv' 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=466207 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 466207 /var/tmp/bdevperf.sock 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 466207 ']' 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:51.167 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.167 [2024-06-08 00:48:09.207833] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:51.167 [2024-06-08 00:48:09.207887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466207 ] 00:22:51.167 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.167 [2024-06-08 00:48:09.257915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.167 [2024-06-08 00:48:09.310046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.738 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:51.738 00:48:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:51.738 00:48:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:22:51.998 [2024-06-08 00:48:10.123374] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.998 [2024-06-08 00:48:10.123428] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:51.998 [2024-06-08 00:48:10.123434] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JxAeA3gswv 00:22:51.998 request: 00:22:51.998 { 00:22:51.998 "name": "TLSTEST", 00:22:51.998 "trtype": "tcp", 00:22:51.998 "traddr": "10.0.0.2", 00:22:51.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.998 "adrfam": "ipv4", 00:22:51.998 "trsvcid": "4420", 00:22:51.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.998 "psk": "/tmp/tmp.JxAeA3gswv", 00:22:51.998 "method": "bdev_nvme_attach_controller", 00:22:51.998 "req_id": 1 00:22:51.998 } 00:22:51.998 Got JSON-RPC error response 00:22:51.998 response: 00:22:51.998 { 00:22:51.998 "code": -1, 00:22:51.998 "message": "Operation not permitted" 00:22:51.998 } 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 466207 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 466207 ']' 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 466207 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 466207 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 466207' 00:22:51.998 killing process with pid 466207 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 466207 00:22:51.998 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.998 00:22:51.998 Latency(us) 00:22:51.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.998 =================================================================================================================== 00:22:51.998 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.998 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # 
wait 466207 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 463556 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 463556 ']' 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 463556 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 463556 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 463556' 00:22:52.259 killing process with pid 463556 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 463556 00:22:52.259 [2024-06-08 00:48:10.371284] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 463556 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=466721 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 466721 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 466721 ']' 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:52.259 00:48:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.520 [2024-06-08 00:48:10.548081] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:52.520 [2024-06-08 00:48:10.548133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.520 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.520 [2024-06-08 00:48:10.630953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.520 [2024-06-08 00:48:10.687996] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.520 [2024-06-08 00:48:10.688029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.520 [2024-06-08 00:48:10.688034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.520 [2024-06-08 00:48:10.688039] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.520 [2024-06-08 00:48:10.688044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.520 [2024-06-08 00:48:10.688061] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JxAeA3gswv 00:22:53.090 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.350 [2024-06-08 00:48:11.491498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.350 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:53.610 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:53.610 [2024-06-08 00:48:11.788225] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:22:53.610 [2024-06-08 00:48:11.788388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.610 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.870 malloc0 00:22:53.870 00:48:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.870 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:22:54.130 [2024-06-08 00:48:12.219093] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:54.130 [2024-06-08 00:48:12.219111] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:54.130 [2024-06-08 00:48:12.219131] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:54.130 request: 00:22:54.130 { 00:22:54.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.130 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.130 "psk": "/tmp/tmp.JxAeA3gswv", 00:22:54.130 "method": "nvmf_subsystem_add_host", 00:22:54.130 "req_id": 1 00:22:54.130 } 00:22:54.130 Got JSON-RPC error response 00:22:54.130 response: 00:22:54.130 { 00:22:54.130 "code": -32603, 00:22:54.130 "message": "Internal error" 00:22:54.130 } 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 466721 ']' 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 466721' 00:22:54.130 killing process with pid 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 466721 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JxAeA3gswv 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:54.130 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=467224 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 467224 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 467224 ']' 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:54.391 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.391 [2024-06-08 00:48:12.471473] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:54.391 [2024-06-08 00:48:12.471530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.391 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.391 [2024-06-08 00:48:12.553390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.391 [2024-06-08 00:48:12.606950] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.391 [2024-06-08 00:48:12.606979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.391 [2024-06-08 00:48:12.606984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.391 [2024-06-08 00:48:12.606989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.391 [2024-06-08 00:48:12.606993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
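Both rejections above, bdev_nvme's "Could not load PSK" on the initiator side and tcp.c's "Could not retrieve PSK from file" on the target side, trace back to the chmod 0666 on the key file; only the chmod 0600 preceding this restart lets setup proceed. A minimal sketch of such an owner-only gate, with the exact permission mask SPDK enforces treated as an assumption:

    import os
    import stat

    def psk_file_is_private(path):
        # Reject key files carrying any permission bits beyond owner
        # read/write (0600); modeled on the pass/fail pattern in this run,
        # not on the actual check in tcp.c / bdev_nvme.c.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & ~0o600) == 0

    # psk_file_is_private("/tmp/tmp.JxAeA3gswv") is False while the file is
    # 0666 and True again after the chmod 0600 above.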
00:22:54.391 [2024-06-08 00:48:12.607010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.967 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:54.967 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:54.967 00:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.967 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:54.967 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.234 00:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.234 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:22:55.234 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JxAeA3gswv 00:22:55.234 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.234 [2024-06-08 00:48:13.404601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.234 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.495 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.495 [2024-06-08 00:48:13.697346] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.495 [2024-06-08 00:48:13.697518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.495 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.756 malloc0 00:22:55.756 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.756 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:22:56.016 [2024-06-08 00:48:14.124130] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=467586 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 467586 /var/tmp/bdevperf.sock 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 467586 ']' 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:56.016 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.016 [2024-06-08 00:48:14.191621] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:56.016 [2024-06-08 00:48:14.191681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467586 ] 00:22:56.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.016 [2024-06-08 00:48:14.242657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.016 [2024-06-08 00:48:14.294961] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.959 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:56.959 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:56.959 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:22:56.959 [2024-06-08 00:48:15.103864] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.959 [2024-06-08 00:48:15.103922] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.959 TLSTESTn1 00:22:56.959 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:57.221 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:57.221 "subsystems": [ 00:22:57.221 { 00:22:57.221 "subsystem": "keyring", 00:22:57.221 "config": [] 00:22:57.221 }, 00:22:57.221 { 00:22:57.221 "subsystem": "iobuf", 00:22:57.221 "config": [ 00:22:57.221 { 00:22:57.221 "method": "iobuf_set_options", 00:22:57.221 "params": { 00:22:57.221 "small_pool_count": 8192, 00:22:57.221 "large_pool_count": 1024, 00:22:57.221 "small_bufsize": 8192, 00:22:57.221 "large_bufsize": 135168 00:22:57.221 } 00:22:57.221 } 00:22:57.221 ] 00:22:57.221 }, 00:22:57.221 { 00:22:57.221 "subsystem": "sock", 00:22:57.221 "config": [ 00:22:57.221 { 00:22:57.221 "method": "sock_set_default_impl", 00:22:57.221 "params": { 00:22:57.221 "impl_name": "posix" 00:22:57.221 } 00:22:57.221 }, 00:22:57.221 { 00:22:57.221 "method": "sock_impl_set_options", 00:22:57.222 "params": { 00:22:57.222 "impl_name": "ssl", 00:22:57.222 "recv_buf_size": 4096, 00:22:57.222 "send_buf_size": 4096, 00:22:57.222 "enable_recv_pipe": true, 00:22:57.222 "enable_quickack": false, 00:22:57.222 "enable_placement_id": 0, 00:22:57.222 "enable_zerocopy_send_server": true, 00:22:57.222 "enable_zerocopy_send_client": false, 00:22:57.222 "zerocopy_threshold": 0, 00:22:57.222 "tls_version": 0, 00:22:57.222 "enable_ktls": false 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "sock_impl_set_options", 00:22:57.222 "params": { 00:22:57.222 "impl_name": "posix", 00:22:57.222 "recv_buf_size": 2097152, 00:22:57.222 "send_buf_size": 2097152, 
00:22:57.222 "enable_recv_pipe": true, 00:22:57.222 "enable_quickack": false, 00:22:57.222 "enable_placement_id": 0, 00:22:57.222 "enable_zerocopy_send_server": true, 00:22:57.222 "enable_zerocopy_send_client": false, 00:22:57.222 "zerocopy_threshold": 0, 00:22:57.222 "tls_version": 0, 00:22:57.222 "enable_ktls": false 00:22:57.222 } 00:22:57.222 } 00:22:57.222 ] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "vmd", 00:22:57.222 "config": [] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "accel", 00:22:57.222 "config": [ 00:22:57.222 { 00:22:57.222 "method": "accel_set_options", 00:22:57.222 "params": { 00:22:57.222 "small_cache_size": 128, 00:22:57.222 "large_cache_size": 16, 00:22:57.222 "task_count": 2048, 00:22:57.222 "sequence_count": 2048, 00:22:57.222 "buf_count": 2048 00:22:57.222 } 00:22:57.222 } 00:22:57.222 ] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "bdev", 00:22:57.222 "config": [ 00:22:57.222 { 00:22:57.222 "method": "bdev_set_options", 00:22:57.222 "params": { 00:22:57.222 "bdev_io_pool_size": 65535, 00:22:57.222 "bdev_io_cache_size": 256, 00:22:57.222 "bdev_auto_examine": true, 00:22:57.222 "iobuf_small_cache_size": 128, 00:22:57.222 "iobuf_large_cache_size": 16 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_raid_set_options", 00:22:57.222 "params": { 00:22:57.222 "process_window_size_kb": 1024 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_iscsi_set_options", 00:22:57.222 "params": { 00:22:57.222 "timeout_sec": 30 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_nvme_set_options", 00:22:57.222 "params": { 00:22:57.222 "action_on_timeout": "none", 00:22:57.222 "timeout_us": 0, 00:22:57.222 "timeout_admin_us": 0, 00:22:57.222 "keep_alive_timeout_ms": 10000, 00:22:57.222 "arbitration_burst": 0, 00:22:57.222 "low_priority_weight": 0, 00:22:57.222 "medium_priority_weight": 0, 00:22:57.222 "high_priority_weight": 0, 00:22:57.222 "nvme_adminq_poll_period_us": 10000, 00:22:57.222 "nvme_ioq_poll_period_us": 0, 00:22:57.222 "io_queue_requests": 0, 00:22:57.222 "delay_cmd_submit": true, 00:22:57.222 "transport_retry_count": 4, 00:22:57.222 "bdev_retry_count": 3, 00:22:57.222 "transport_ack_timeout": 0, 00:22:57.222 "ctrlr_loss_timeout_sec": 0, 00:22:57.222 "reconnect_delay_sec": 0, 00:22:57.222 "fast_io_fail_timeout_sec": 0, 00:22:57.222 "disable_auto_failback": false, 00:22:57.222 "generate_uuids": false, 00:22:57.222 "transport_tos": 0, 00:22:57.222 "nvme_error_stat": false, 00:22:57.222 "rdma_srq_size": 0, 00:22:57.222 "io_path_stat": false, 00:22:57.222 "allow_accel_sequence": false, 00:22:57.222 "rdma_max_cq_size": 0, 00:22:57.222 "rdma_cm_event_timeout_ms": 0, 00:22:57.222 "dhchap_digests": [ 00:22:57.222 "sha256", 00:22:57.222 "sha384", 00:22:57.222 "sha512" 00:22:57.222 ], 00:22:57.222 "dhchap_dhgroups": [ 00:22:57.222 "null", 00:22:57.222 "ffdhe2048", 00:22:57.222 "ffdhe3072", 00:22:57.222 "ffdhe4096", 00:22:57.222 "ffdhe6144", 00:22:57.222 "ffdhe8192" 00:22:57.222 ] 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_nvme_set_hotplug", 00:22:57.222 "params": { 00:22:57.222 "period_us": 100000, 00:22:57.222 "enable": false 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_malloc_create", 00:22:57.222 "params": { 00:22:57.222 "name": "malloc0", 00:22:57.222 "num_blocks": 8192, 00:22:57.222 "block_size": 4096, 00:22:57.222 "physical_block_size": 4096, 00:22:57.222 "uuid": "c9495c22-dcd3-48ba-b257-7c3d0ded0ad6", 
00:22:57.222 "optimal_io_boundary": 0 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "bdev_wait_for_examine" 00:22:57.222 } 00:22:57.222 ] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "nbd", 00:22:57.222 "config": [] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "scheduler", 00:22:57.222 "config": [ 00:22:57.222 { 00:22:57.222 "method": "framework_set_scheduler", 00:22:57.222 "params": { 00:22:57.222 "name": "static" 00:22:57.222 } 00:22:57.222 } 00:22:57.222 ] 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "subsystem": "nvmf", 00:22:57.222 "config": [ 00:22:57.222 { 00:22:57.222 "method": "nvmf_set_config", 00:22:57.222 "params": { 00:22:57.222 "discovery_filter": "match_any", 00:22:57.222 "admin_cmd_passthru": { 00:22:57.222 "identify_ctrlr": false 00:22:57.222 } 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_set_max_subsystems", 00:22:57.222 "params": { 00:22:57.222 "max_subsystems": 1024 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_set_crdt", 00:22:57.222 "params": { 00:22:57.222 "crdt1": 0, 00:22:57.222 "crdt2": 0, 00:22:57.222 "crdt3": 0 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_create_transport", 00:22:57.222 "params": { 00:22:57.222 "trtype": "TCP", 00:22:57.222 "max_queue_depth": 128, 00:22:57.222 "max_io_qpairs_per_ctrlr": 127, 00:22:57.222 "in_capsule_data_size": 4096, 00:22:57.222 "max_io_size": 131072, 00:22:57.222 "io_unit_size": 131072, 00:22:57.222 "max_aq_depth": 128, 00:22:57.222 "num_shared_buffers": 511, 00:22:57.222 "buf_cache_size": 4294967295, 00:22:57.222 "dif_insert_or_strip": false, 00:22:57.222 "zcopy": false, 00:22:57.222 "c2h_success": false, 00:22:57.222 "sock_priority": 0, 00:22:57.222 "abort_timeout_sec": 1, 00:22:57.222 "ack_timeout": 0, 00:22:57.222 "data_wr_pool_size": 0 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_create_subsystem", 00:22:57.222 "params": { 00:22:57.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.222 "allow_any_host": false, 00:22:57.222 "serial_number": "SPDK00000000000001", 00:22:57.222 "model_number": "SPDK bdev Controller", 00:22:57.222 "max_namespaces": 10, 00:22:57.222 "min_cntlid": 1, 00:22:57.222 "max_cntlid": 65519, 00:22:57.222 "ana_reporting": false 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_subsystem_add_host", 00:22:57.222 "params": { 00:22:57.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.222 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.222 "psk": "/tmp/tmp.JxAeA3gswv" 00:22:57.222 } 00:22:57.222 }, 00:22:57.222 { 00:22:57.222 "method": "nvmf_subsystem_add_ns", 00:22:57.222 "params": { 00:22:57.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.222 "namespace": { 00:22:57.222 "nsid": 1, 00:22:57.222 "bdev_name": "malloc0", 00:22:57.222 "nguid": "C9495C22DCD348BAB2577C3D0DED0AD6", 00:22:57.222 "uuid": "c9495c22-dcd3-48ba-b257-7c3d0ded0ad6", 00:22:57.222 "no_auto_visible": false 00:22:57.222 } 00:22:57.223 } 00:22:57.223 }, 00:22:57.223 { 00:22:57.223 "method": "nvmf_subsystem_add_listener", 00:22:57.223 "params": { 00:22:57.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.223 "listen_address": { 00:22:57.223 "trtype": "TCP", 00:22:57.223 "adrfam": "IPv4", 00:22:57.223 "traddr": "10.0.0.2", 00:22:57.223 "trsvcid": "4420" 00:22:57.223 }, 00:22:57.223 "secure_channel": true 00:22:57.223 } 00:22:57.223 } 00:22:57.223 ] 00:22:57.223 } 00:22:57.223 ] 00:22:57.223 }' 00:22:57.223 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:57.484 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:57.484 "subsystems": [ 00:22:57.484 { 00:22:57.484 "subsystem": "keyring", 00:22:57.484 "config": [] 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "subsystem": "iobuf", 00:22:57.484 "config": [ 00:22:57.484 { 00:22:57.484 "method": "iobuf_set_options", 00:22:57.484 "params": { 00:22:57.484 "small_pool_count": 8192, 00:22:57.484 "large_pool_count": 1024, 00:22:57.484 "small_bufsize": 8192, 00:22:57.484 "large_bufsize": 135168 00:22:57.484 } 00:22:57.484 } 00:22:57.484 ] 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "subsystem": "sock", 00:22:57.484 "config": [ 00:22:57.484 { 00:22:57.484 "method": "sock_set_default_impl", 00:22:57.484 "params": { 00:22:57.484 "impl_name": "posix" 00:22:57.484 } 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "method": "sock_impl_set_options", 00:22:57.484 "params": { 00:22:57.484 "impl_name": "ssl", 00:22:57.484 "recv_buf_size": 4096, 00:22:57.484 "send_buf_size": 4096, 00:22:57.484 "enable_recv_pipe": true, 00:22:57.484 "enable_quickack": false, 00:22:57.484 "enable_placement_id": 0, 00:22:57.484 "enable_zerocopy_send_server": true, 00:22:57.484 "enable_zerocopy_send_client": false, 00:22:57.484 "zerocopy_threshold": 0, 00:22:57.484 "tls_version": 0, 00:22:57.484 "enable_ktls": false 00:22:57.484 } 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "method": "sock_impl_set_options", 00:22:57.484 "params": { 00:22:57.484 "impl_name": "posix", 00:22:57.484 "recv_buf_size": 2097152, 00:22:57.484 "send_buf_size": 2097152, 00:22:57.484 "enable_recv_pipe": true, 00:22:57.484 "enable_quickack": false, 00:22:57.484 "enable_placement_id": 0, 00:22:57.484 "enable_zerocopy_send_server": true, 00:22:57.484 "enable_zerocopy_send_client": false, 00:22:57.484 "zerocopy_threshold": 0, 00:22:57.484 "tls_version": 0, 00:22:57.484 "enable_ktls": false 00:22:57.484 } 00:22:57.484 } 00:22:57.484 ] 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "subsystem": "vmd", 00:22:57.484 "config": [] 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "subsystem": "accel", 00:22:57.484 "config": [ 00:22:57.484 { 00:22:57.484 "method": "accel_set_options", 00:22:57.484 "params": { 00:22:57.484 "small_cache_size": 128, 00:22:57.484 "large_cache_size": 16, 00:22:57.484 "task_count": 2048, 00:22:57.484 "sequence_count": 2048, 00:22:57.484 "buf_count": 2048 00:22:57.484 } 00:22:57.484 } 00:22:57.484 ] 00:22:57.484 }, 00:22:57.484 { 00:22:57.484 "subsystem": "bdev", 00:22:57.484 "config": [ 00:22:57.484 { 00:22:57.484 "method": "bdev_set_options", 00:22:57.484 "params": { 00:22:57.484 "bdev_io_pool_size": 65535, 00:22:57.484 "bdev_io_cache_size": 256, 00:22:57.484 "bdev_auto_examine": true, 00:22:57.484 "iobuf_small_cache_size": 128, 00:22:57.485 "iobuf_large_cache_size": 16 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_raid_set_options", 00:22:57.485 "params": { 00:22:57.485 "process_window_size_kb": 1024 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_iscsi_set_options", 00:22:57.485 "params": { 00:22:57.485 "timeout_sec": 30 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_nvme_set_options", 00:22:57.485 "params": { 00:22:57.485 "action_on_timeout": "none", 00:22:57.485 "timeout_us": 0, 00:22:57.485 "timeout_admin_us": 0, 00:22:57.485 "keep_alive_timeout_ms": 10000, 00:22:57.485 "arbitration_burst": 0, 00:22:57.485 "low_priority_weight": 0, 
00:22:57.485 "medium_priority_weight": 0, 00:22:57.485 "high_priority_weight": 0, 00:22:57.485 "nvme_adminq_poll_period_us": 10000, 00:22:57.485 "nvme_ioq_poll_period_us": 0, 00:22:57.485 "io_queue_requests": 512, 00:22:57.485 "delay_cmd_submit": true, 00:22:57.485 "transport_retry_count": 4, 00:22:57.485 "bdev_retry_count": 3, 00:22:57.485 "transport_ack_timeout": 0, 00:22:57.485 "ctrlr_loss_timeout_sec": 0, 00:22:57.485 "reconnect_delay_sec": 0, 00:22:57.485 "fast_io_fail_timeout_sec": 0, 00:22:57.485 "disable_auto_failback": false, 00:22:57.485 "generate_uuids": false, 00:22:57.485 "transport_tos": 0, 00:22:57.485 "nvme_error_stat": false, 00:22:57.485 "rdma_srq_size": 0, 00:22:57.485 "io_path_stat": false, 00:22:57.485 "allow_accel_sequence": false, 00:22:57.485 "rdma_max_cq_size": 0, 00:22:57.485 "rdma_cm_event_timeout_ms": 0, 00:22:57.485 "dhchap_digests": [ 00:22:57.485 "sha256", 00:22:57.485 "sha384", 00:22:57.485 "sha512" 00:22:57.485 ], 00:22:57.485 "dhchap_dhgroups": [ 00:22:57.485 "null", 00:22:57.485 "ffdhe2048", 00:22:57.485 "ffdhe3072", 00:22:57.485 "ffdhe4096", 00:22:57.485 "ffdhe6144", 00:22:57.485 "ffdhe8192" 00:22:57.485 ] 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_nvme_attach_controller", 00:22:57.485 "params": { 00:22:57.485 "name": "TLSTEST", 00:22:57.485 "trtype": "TCP", 00:22:57.485 "adrfam": "IPv4", 00:22:57.485 "traddr": "10.0.0.2", 00:22:57.485 "trsvcid": "4420", 00:22:57.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.485 "prchk_reftag": false, 00:22:57.485 "prchk_guard": false, 00:22:57.485 "ctrlr_loss_timeout_sec": 0, 00:22:57.485 "reconnect_delay_sec": 0, 00:22:57.485 "fast_io_fail_timeout_sec": 0, 00:22:57.485 "psk": "/tmp/tmp.JxAeA3gswv", 00:22:57.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.485 "hdgst": false, 00:22:57.485 "ddgst": false 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_nvme_set_hotplug", 00:22:57.485 "params": { 00:22:57.485 "period_us": 100000, 00:22:57.485 "enable": false 00:22:57.485 } 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "method": "bdev_wait_for_examine" 00:22:57.485 } 00:22:57.485 ] 00:22:57.485 }, 00:22:57.485 { 00:22:57.485 "subsystem": "nbd", 00:22:57.485 "config": [] 00:22:57.485 } 00:22:57.485 ] 00:22:57.485 }' 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 467586 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 467586 ']' 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 467586 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 467586 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 467586' 00:22:57.485 killing process with pid 467586 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 467586 00:22:57.485 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.485 00:22:57.485 Latency(us) 00:22:57.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.485 
=================================================================================================================== 00:22:57.485 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.485 [2024-06-08 00:48:15.738682] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:57.485 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 467586 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 467224 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 467224 ']' 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 467224 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 467224 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 467224' 00:22:57.746 killing process with pid 467224 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 467224 00:22:57.746 [2024-06-08 00:48:15.904112] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:57.746 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 467224 00:22:57.746 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:57.746 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.746 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:57.746 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.746 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:57.747 "subsystems": [ 00:22:57.747 { 00:22:57.747 "subsystem": "keyring", 00:22:57.747 "config": [] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "iobuf", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "iobuf_set_options", 00:22:57.747 "params": { 00:22:57.747 "small_pool_count": 8192, 00:22:57.747 "large_pool_count": 1024, 00:22:57.747 "small_bufsize": 8192, 00:22:57.747 "large_bufsize": 135168 00:22:57.747 } 00:22:57.747 } 00:22:57.747 ] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "sock", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "sock_set_default_impl", 00:22:57.747 "params": { 00:22:57.747 "impl_name": "posix" 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "sock_impl_set_options", 00:22:57.747 "params": { 00:22:57.747 "impl_name": "ssl", 00:22:57.747 "recv_buf_size": 4096, 00:22:57.747 "send_buf_size": 4096, 00:22:57.747 "enable_recv_pipe": true, 00:22:57.747 "enable_quickack": false, 00:22:57.747 "enable_placement_id": 0, 00:22:57.747 "enable_zerocopy_send_server": true, 00:22:57.747 "enable_zerocopy_send_client": false, 00:22:57.747 "zerocopy_threshold": 0, 00:22:57.747 "tls_version": 0, 00:22:57.747 "enable_ktls": false 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "sock_impl_set_options", 00:22:57.747 "params": { 
00:22:57.747 "impl_name": "posix", 00:22:57.747 "recv_buf_size": 2097152, 00:22:57.747 "send_buf_size": 2097152, 00:22:57.747 "enable_recv_pipe": true, 00:22:57.747 "enable_quickack": false, 00:22:57.747 "enable_placement_id": 0, 00:22:57.747 "enable_zerocopy_send_server": true, 00:22:57.747 "enable_zerocopy_send_client": false, 00:22:57.747 "zerocopy_threshold": 0, 00:22:57.747 "tls_version": 0, 00:22:57.747 "enable_ktls": false 00:22:57.747 } 00:22:57.747 } 00:22:57.747 ] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "vmd", 00:22:57.747 "config": [] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "accel", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "accel_set_options", 00:22:57.747 "params": { 00:22:57.747 "small_cache_size": 128, 00:22:57.747 "large_cache_size": 16, 00:22:57.747 "task_count": 2048, 00:22:57.747 "sequence_count": 2048, 00:22:57.747 "buf_count": 2048 00:22:57.747 } 00:22:57.747 } 00:22:57.747 ] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "bdev", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "bdev_set_options", 00:22:57.747 "params": { 00:22:57.747 "bdev_io_pool_size": 65535, 00:22:57.747 "bdev_io_cache_size": 256, 00:22:57.747 "bdev_auto_examine": true, 00:22:57.747 "iobuf_small_cache_size": 128, 00:22:57.747 "iobuf_large_cache_size": 16 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_raid_set_options", 00:22:57.747 "params": { 00:22:57.747 "process_window_size_kb": 1024 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_iscsi_set_options", 00:22:57.747 "params": { 00:22:57.747 "timeout_sec": 30 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_nvme_set_options", 00:22:57.747 "params": { 00:22:57.747 "action_on_timeout": "none", 00:22:57.747 "timeout_us": 0, 00:22:57.747 "timeout_admin_us": 0, 00:22:57.747 "keep_alive_timeout_ms": 10000, 00:22:57.747 "arbitration_burst": 0, 00:22:57.747 "low_priority_weight": 0, 00:22:57.747 "medium_priority_weight": 0, 00:22:57.747 "high_priority_weight": 0, 00:22:57.747 "nvme_adminq_poll_period_us": 10000, 00:22:57.747 "nvme_ioq_poll_period_us": 0, 00:22:57.747 "io_queue_requests": 0, 00:22:57.747 "delay_cmd_submit": true, 00:22:57.747 "transport_retry_count": 4, 00:22:57.747 "bdev_retry_count": 3, 00:22:57.747 "transport_ack_timeout": 0, 00:22:57.747 "ctrlr_loss_timeout_sec": 0, 00:22:57.747 "reconnect_delay_sec": 0, 00:22:57.747 "fast_io_fail_timeout_sec": 0, 00:22:57.747 "disable_auto_failback": false, 00:22:57.747 "generate_uuids": false, 00:22:57.747 "transport_tos": 0, 00:22:57.747 "nvme_error_stat": false, 00:22:57.747 "rdma_srq_size": 0, 00:22:57.747 "io_path_stat": false, 00:22:57.747 "allow_accel_sequence": false, 00:22:57.747 "rdma_max_cq_size": 0, 00:22:57.747 "rdma_cm_event_timeout_ms": 0, 00:22:57.747 "dhchap_digests": [ 00:22:57.747 "sha256", 00:22:57.747 "sha384", 00:22:57.747 "sha512" 00:22:57.747 ], 00:22:57.747 "dhchap_dhgroups": [ 00:22:57.747 "null", 00:22:57.747 "ffdhe2048", 00:22:57.747 "ffdhe3072", 00:22:57.747 "ffdhe4096", 00:22:57.747 "ffdhe6144", 00:22:57.747 "ffdhe8192" 00:22:57.747 ] 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_nvme_set_hotplug", 00:22:57.747 "params": { 00:22:57.747 "period_us": 100000, 00:22:57.747 "enable": false 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_malloc_create", 00:22:57.747 "params": { 00:22:57.747 "name": "malloc0", 00:22:57.747 "num_blocks": 8192, 00:22:57.747 "block_size": 
4096, 00:22:57.747 "physical_block_size": 4096, 00:22:57.747 "uuid": "c9495c22-dcd3-48ba-b257-7c3d0ded0ad6", 00:22:57.747 "optimal_io_boundary": 0 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "bdev_wait_for_examine" 00:22:57.747 } 00:22:57.747 ] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "nbd", 00:22:57.747 "config": [] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "scheduler", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "framework_set_scheduler", 00:22:57.747 "params": { 00:22:57.747 "name": "static" 00:22:57.747 } 00:22:57.747 } 00:22:57.747 ] 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "subsystem": "nvmf", 00:22:57.747 "config": [ 00:22:57.747 { 00:22:57.747 "method": "nvmf_set_config", 00:22:57.747 "params": { 00:22:57.747 "discovery_filter": "match_any", 00:22:57.747 "admin_cmd_passthru": { 00:22:57.747 "identify_ctrlr": false 00:22:57.747 } 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "nvmf_set_max_subsystems", 00:22:57.747 "params": { 00:22:57.747 "max_subsystems": 1024 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "nvmf_set_crdt", 00:22:57.747 "params": { 00:22:57.747 "crdt1": 0, 00:22:57.747 "crdt2": 0, 00:22:57.747 "crdt3": 0 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "nvmf_create_transport", 00:22:57.747 "params": { 00:22:57.747 "trtype": "TCP", 00:22:57.747 "max_queue_depth": 128, 00:22:57.747 "max_io_qpairs_per_ctrlr": 127, 00:22:57.747 "in_capsule_data_size": 4096, 00:22:57.747 "max_io_size": 131072, 00:22:57.747 "io_unit_size": 131072, 00:22:57.747 "max_aq_depth": 128, 00:22:57.747 "num_shared_buffers": 511, 00:22:57.747 "buf_cache_size": 4294967295, 00:22:57.747 "dif_insert_or_strip": false, 00:22:57.747 "zcopy": false, 00:22:57.747 "c2h_success": false, 00:22:57.747 "sock_priority": 0, 00:22:57.747 "abort_timeout_sec": 1, 00:22:57.747 "ack_timeout": 0, 00:22:57.747 "data_wr_pool_size": 0 00:22:57.747 } 00:22:57.747 }, 00:22:57.747 { 00:22:57.747 "method": "nvmf_create_subsystem", 00:22:57.747 "params": { 00:22:57.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.747 "allow_any_host": false, 00:22:57.747 "serial_number": "SPDK00000000000001", 00:22:57.747 "model_number": "SPDK bdev Controller", 00:22:57.747 "max_namespaces": 10, 00:22:57.747 "min_cntlid": 1, 00:22:57.747 "max_cntlid": 65519, 00:22:57.748 "ana_reporting": false 00:22:57.748 } 00:22:57.748 }, 00:22:57.748 { 00:22:57.748 "method": "nvmf_subsystem_add_host", 00:22:57.748 "params": { 00:22:57.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.748 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.748 "psk": "/tmp/tmp.JxAeA3gswv" 00:22:57.748 } 00:22:57.748 }, 00:22:57.748 { 00:22:57.748 "method": "nvmf_subsystem_add_ns", 00:22:57.748 "params": { 00:22:57.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.748 "namespace": { 00:22:57.748 "nsid": 1, 00:22:57.748 "bdev_name": "malloc0", 00:22:57.748 "nguid": "C9495C22DCD348BAB2577C3D0DED0AD6", 00:22:57.748 "uuid": "c9495c22-dcd3-48ba-b257-7c3d0ded0ad6", 00:22:57.748 "no_auto_visible": false 00:22:57.748 } 00:22:57.748 } 00:22:57.748 }, 00:22:57.748 { 00:22:57.748 "method": "nvmf_subsystem_add_listener", 00:22:57.748 "params": { 00:22:57.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.748 "listen_address": { 00:22:57.748 "trtype": "TCP", 00:22:57.748 "adrfam": "IPv4", 00:22:57.748 "traddr": "10.0.0.2", 00:22:57.748 "trsvcid": "4420" 00:22:57.748 }, 00:22:57.748 "secure_channel": true 00:22:57.748 } 00:22:57.748 } 00:22:57.748 ] 
00:22:57.748 } 00:22:57.748 ] 00:22:57.748 }' 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=467975 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 467975 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 467975 ']' 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:58.009 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 [2024-06-08 00:48:16.081040] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:58.009 [2024-06-08 00:48:16.081090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.009 [2024-06-08 00:48:16.160888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.009 [2024-06-08 00:48:16.218002] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.009 [2024-06-08 00:48:16.218034] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.009 [2024-06-08 00:48:16.218039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.009 [2024-06-08 00:48:16.218043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.009 [2024-06-08 00:48:16.218047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.009 [2024-06-08 00:48:16.218093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.377 [2024-06-08 00:48:16.401539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.377 [2024-06-08 00:48:16.417500] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:58.377 [2024-06-08 00:48:16.433550] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.377 [2024-06-08 00:48:16.447707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=468268 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 468268 /var/tmp/bdevperf.sock 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 468268 ']' 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.645 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:58.646 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:58.646 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:58.646 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:58.646 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.646 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:58.646 "subsystems": [ 00:22:58.646 { 00:22:58.646 "subsystem": "keyring", 00:22:58.646 "config": [] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "iobuf", 00:22:58.646 "config": [ 00:22:58.646 { 00:22:58.646 "method": "iobuf_set_options", 00:22:58.646 "params": { 00:22:58.646 "small_pool_count": 8192, 00:22:58.646 "large_pool_count": 1024, 00:22:58.646 "small_bufsize": 8192, 00:22:58.646 "large_bufsize": 135168 00:22:58.646 } 00:22:58.646 } 00:22:58.646 ] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "sock", 00:22:58.646 "config": [ 00:22:58.646 { 00:22:58.646 "method": "sock_set_default_impl", 00:22:58.646 "params": { 00:22:58.646 "impl_name": "posix" 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "sock_impl_set_options", 00:22:58.646 "params": { 00:22:58.646 "impl_name": "ssl", 00:22:58.646 "recv_buf_size": 4096, 00:22:58.646 "send_buf_size": 4096, 00:22:58.646 "enable_recv_pipe": true, 00:22:58.646 "enable_quickack": false, 00:22:58.646 "enable_placement_id": 0, 00:22:58.646 "enable_zerocopy_send_server": true, 00:22:58.646 "enable_zerocopy_send_client": false, 00:22:58.646 "zerocopy_threshold": 0, 00:22:58.646 "tls_version": 0, 00:22:58.646 "enable_ktls": false 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "sock_impl_set_options", 00:22:58.646 "params": { 00:22:58.646 "impl_name": "posix", 00:22:58.646 "recv_buf_size": 2097152, 00:22:58.646 "send_buf_size": 2097152, 00:22:58.646 "enable_recv_pipe": true, 00:22:58.646 "enable_quickack": false, 00:22:58.646 "enable_placement_id": 0, 00:22:58.646 "enable_zerocopy_send_server": true, 00:22:58.646 "enable_zerocopy_send_client": false, 00:22:58.646 "zerocopy_threshold": 0, 00:22:58.646 "tls_version": 0, 00:22:58.646 "enable_ktls": false 00:22:58.646 } 00:22:58.646 } 00:22:58.646 ] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "vmd", 00:22:58.646 "config": [] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "accel", 00:22:58.646 "config": [ 00:22:58.646 { 00:22:58.646 "method": "accel_set_options", 00:22:58.646 "params": { 00:22:58.646 "small_cache_size": 128, 00:22:58.646 "large_cache_size": 16, 00:22:58.646 "task_count": 2048, 00:22:58.646 "sequence_count": 2048, 00:22:58.646 "buf_count": 2048 00:22:58.646 } 00:22:58.646 } 00:22:58.646 ] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "bdev", 00:22:58.646 "config": [ 00:22:58.646 { 00:22:58.646 "method": "bdev_set_options", 00:22:58.646 "params": { 00:22:58.646 "bdev_io_pool_size": 65535, 00:22:58.646 "bdev_io_cache_size": 256, 00:22:58.646 "bdev_auto_examine": true, 00:22:58.646 "iobuf_small_cache_size": 128, 00:22:58.646 "iobuf_large_cache_size": 16 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "bdev_raid_set_options", 00:22:58.646 "params": { 00:22:58.646 "process_window_size_kb": 1024 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "bdev_iscsi_set_options", 00:22:58.646 "params": { 00:22:58.646 "timeout_sec": 30 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": 
"bdev_nvme_set_options", 00:22:58.646 "params": { 00:22:58.646 "action_on_timeout": "none", 00:22:58.646 "timeout_us": 0, 00:22:58.646 "timeout_admin_us": 0, 00:22:58.646 "keep_alive_timeout_ms": 10000, 00:22:58.646 "arbitration_burst": 0, 00:22:58.646 "low_priority_weight": 0, 00:22:58.646 "medium_priority_weight": 0, 00:22:58.646 "high_priority_weight": 0, 00:22:58.646 "nvme_adminq_poll_period_us": 10000, 00:22:58.646 "nvme_ioq_poll_period_us": 0, 00:22:58.646 "io_queue_requests": 512, 00:22:58.646 "delay_cmd_submit": true, 00:22:58.646 "transport_retry_count": 4, 00:22:58.646 "bdev_retry_count": 3, 00:22:58.646 "transport_ack_timeout": 0, 00:22:58.646 "ctrlr_loss_timeout_sec": 0, 00:22:58.646 "reconnect_delay_sec": 0, 00:22:58.646 "fast_io_fail_timeout_sec": 0, 00:22:58.646 "disable_auto_failback": false, 00:22:58.646 "generate_uuids": false, 00:22:58.646 "transport_tos": 0, 00:22:58.646 "nvme_error_stat": false, 00:22:58.646 "rdma_srq_size": 0, 00:22:58.646 "io_path_stat": false, 00:22:58.646 "allow_accel_sequence": false, 00:22:58.646 "rdma_max_cq_size": 0, 00:22:58.646 "rdma_cm_event_timeout_ms": 0, 00:22:58.646 "dhchap_digests": [ 00:22:58.646 "sha256", 00:22:58.646 "sha384", 00:22:58.646 "sha512" 00:22:58.646 ], 00:22:58.646 "dhchap_dhgroups": [ 00:22:58.646 "null", 00:22:58.646 "ffdhe2048", 00:22:58.646 "ffdhe3072", 00:22:58.646 "ffdhe4096", 00:22:58.646 "ffdhe6144", 00:22:58.646 "ffdhe8192" 00:22:58.646 ] 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "bdev_nvme_attach_controller", 00:22:58.646 "params": { 00:22:58.646 "name": "TLSTEST", 00:22:58.646 "trtype": "TCP", 00:22:58.646 "adrfam": "IPv4", 00:22:58.646 "traddr": "10.0.0.2", 00:22:58.646 "trsvcid": "4420", 00:22:58.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.646 "prchk_reftag": false, 00:22:58.646 "prchk_guard": false, 00:22:58.646 "ctrlr_loss_timeout_sec": 0, 00:22:58.646 "reconnect_delay_sec": 0, 00:22:58.646 "fast_io_fail_timeout_sec": 0, 00:22:58.646 "psk": "/tmp/tmp.JxAeA3gswv", 00:22:58.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.646 "hdgst": false, 00:22:58.646 "ddgst": false 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "bdev_nvme_set_hotplug", 00:22:58.646 "params": { 00:22:58.646 "period_us": 100000, 00:22:58.646 "enable": false 00:22:58.646 } 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "method": "bdev_wait_for_examine" 00:22:58.646 } 00:22:58.646 ] 00:22:58.646 }, 00:22:58.646 { 00:22:58.646 "subsystem": "nbd", 00:22:58.646 "config": [] 00:22:58.646 } 00:22:58.646 ] 00:22:58.646 }' 00:22:58.646 [2024-06-08 00:48:16.926926] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:22:58.646 [2024-06-08 00:48:16.926978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468268 ] 00:22:58.908 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.908 [2024-06-08 00:48:16.976706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.908 [2024-06-08 00:48:17.029357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.908 [2024-06-08 00:48:17.153820] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.908 [2024-06-08 00:48:17.153883] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.479 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:59.479 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:59.479 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.739 Running I/O for 10 seconds... 00:23:09.738 00:23:09.739 Latency(us) 00:23:09.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.739 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.739 Verification LBA range: start 0x0 length 0x2000 00:23:09.739 TLSTESTn1 : 10.06 3346.13 13.07 0.00 0.00 38129.05 6116.69 135441.07 00:23:09.739 =================================================================================================================== 00:23:09.739 Total : 3346.13 13.07 0.00 0.00 38129.05 6116.69 135441.07 00:23:09.739 0 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 468268 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 468268 ']' 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 468268 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 468268 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 468268' 00:23:09.739 killing process with pid 468268 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 468268 00:23:09.739 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.739 00:23:09.739 Latency(us) 00:23:09.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.739 =================================================================================================================== 00:23:09.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.739 [2024-06-08 00:48:27.996903] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal 
in v24.09 hit 1 times 00:23:09.739 00:48:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 468268 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 467975 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 467975 ']' 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 467975 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 467975 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 467975' 00:23:09.999 killing process with pid 467975 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 467975 00:23:09.999 [2024-06-08 00:48:28.164962] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.999 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 467975 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=470395 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 470395 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 470395 ']' 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:10.261 00:48:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.261 [2024-06-08 00:48:28.351339] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:10.261 [2024-06-08 00:48:28.351419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.261 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.261 [2024-06-08 00:48:28.419484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.261 [2024-06-08 00:48:28.484694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
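The trace below walks through setup_nvmf_tgt (tls.sh@49 onward) against the freshly started target. Condensed, the sequence it performs is sketched here; every call appears verbatim in the following lines, only the rpc.py path is abbreviated:

    RPC=$SPDK/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                  # TCP transport, default opts
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                     # -k requests a TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0            # 32 MiB ram disk, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv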
00:23:10.261 [2024-06-08 00:48:28.484732] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.261 [2024-06-08 00:48:28.484739] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.261 [2024-06-08 00:48:28.484749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.261 [2024-06-08 00:48:28.484754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.261 [2024-06-08 00:48:28.484779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JxAeA3gswv 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JxAeA3gswv 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.203 [2024-06-08 00:48:29.299496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.203 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.463 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.463 [2024-06-08 00:48:29.628313] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.464 [2024-06-08 00:48:29.628517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.464 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.724 malloc0 00:23:11.724 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.724 00:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JxAeA3gswv 00:23:11.985 [2024-06-08 00:48:30.096218] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=470809 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 470809 /var/tmp/bdevperf.sock 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 470809 ']' 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:11.985 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.985 [2024-06-08 00:48:30.173071] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:11.985 [2024-06-08 00:48:30.173120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470809 ] 00:23:11.985 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.985 [2024-06-08 00:48:30.248850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.246 [2024-06-08 00:48:30.302405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.816 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:12.816 00:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:12.816 00:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JxAeA3gswv 00:23:12.816 00:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.076 [2024-06-08 00:48:31.224694] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.076 nvme0n1 00:23:13.076 00:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.336 Running I/O for 1 seconds... 
00:23:14.277 00:23:14.277 Latency(us) 00:23:14.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.277 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:14.277 Verification LBA range: start 0x0 length 0x2000 00:23:14.277 nvme0n1 : 1.06 2126.73 8.31 0.00 0.00 58709.58 5652.48 107915.95 00:23:14.277 =================================================================================================================== 00:23:14.277 Total : 2126.73 8.31 0.00 0.00 58709.58 5652.48 107915.95 00:23:14.277 0 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 470809 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 470809 ']' 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 470809 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 470809 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 470809' 00:23:14.277 killing process with pid 470809 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 470809 00:23:14.277 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.277 00:23:14.277 Latency(us) 00:23:14.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.277 =================================================================================================================== 00:23:14.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.277 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 470809 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 470395 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 470395 ']' 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 470395 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 470395 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 470395' 00:23:14.537 killing process with pid 470395 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 470395 00:23:14.537 [2024-06-08 00:48:32.683270] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.537 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 470395 00:23:14.797 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.798 00:48:32 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=471334 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 471334 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 471334 ']' 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:14.798 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.798 [2024-06-08 00:48:32.880416] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:14.798 [2024-06-08 00:48:32.880472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.798 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.798 [2024-06-08 00:48:32.944030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.798 [2024-06-08 00:48:33.008274] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.798 [2024-06-08 00:48:33.008309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.798 [2024-06-08 00:48:33.008317] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.798 [2024-06-08 00:48:33.008324] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.798 [2024-06-08 00:48:33.008329] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
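nvmfappstart above blocks in waitforlisten until pid 471334 answers on /var/tmp/spdk.sock. A simplified sketch of what that helper does (assumed shape; the real implementation in autotest_common.sh carries extra retry and error handling):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        # Poll the RPC socket until the app responds, bailing out if it died.
        while ! $SPDK/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
            kill -0 "$pid" 2>/dev/null || return 1
            sleep 0.1
        done
    }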
00:23:14.798 [2024-06-08 00:48:33.008347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.368 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:15.368 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:15.368 00:48:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.368 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:15.368 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.628 [2024-06-08 00:48:33.686948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.628 malloc0 00:23:15.628 [2024-06-08 00:48:33.713735] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.628 [2024-06-08 00:48:33.713939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=471578 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 471578 /var/tmp/bdevperf.sock 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 471578 ']' 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:15.628 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.628 [2024-06-08 00:48:33.790493] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
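The bdevperf client starting here attaches with the keyring API instead of a raw PSK path: tls.sh@255 and @256 first register the key file under a name, then reference that name in the attach. Condensed from the calls that follow in the trace:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.JxAeA3gswv    # register PSK file as 'key0'
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1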
00:23:15.628 [2024-06-08 00:48:33.790542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471578 ] 00:23:15.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.628 [2024-06-08 00:48:33.865799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.888 [2024-06-08 00:48:33.919830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.458 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:16.458 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:16.458 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JxAeA3gswv 00:23:16.458 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:16.718 [2024-06-08 00:48:34.865734] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.718 nvme0n1 00:23:16.718 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.978 Running I/O for 1 seconds... 00:23:17.919 00:23:17.919 Latency(us) 00:23:17.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.919 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:17.919 Verification LBA range: start 0x0 length 0x2000 00:23:17.919 nvme0n1 : 1.03 2617.23 10.22 0.00 0.00 48209.11 5679.79 58108.59 00:23:17.919 =================================================================================================================== 00:23:17.919 Total : 2617.23 10.22 0.00 0.00 48209.11 5679.79 58108.59 00:23:17.919 0 00:23:17.919 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:17.919 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:17.919 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.179 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.179 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:18.179 "subsystems": [ 00:23:18.179 { 00:23:18.179 "subsystem": "keyring", 00:23:18.179 "config": [ 00:23:18.179 { 00:23:18.179 "method": "keyring_file_add_key", 00:23:18.179 "params": { 00:23:18.179 "name": "key0", 00:23:18.179 "path": "/tmp/tmp.JxAeA3gswv" 00:23:18.179 } 00:23:18.179 } 00:23:18.179 ] 00:23:18.179 }, 00:23:18.179 { 00:23:18.179 "subsystem": "iobuf", 00:23:18.179 "config": [ 00:23:18.179 { 00:23:18.179 "method": "iobuf_set_options", 00:23:18.179 "params": { 00:23:18.179 "small_pool_count": 8192, 00:23:18.179 "large_pool_count": 1024, 00:23:18.179 "small_bufsize": 8192, 00:23:18.179 "large_bufsize": 135168 00:23:18.179 } 00:23:18.179 } 00:23:18.179 ] 00:23:18.179 }, 00:23:18.179 { 00:23:18.179 "subsystem": "sock", 00:23:18.179 "config": [ 00:23:18.179 { 00:23:18.179 "method": "sock_set_default_impl", 00:23:18.179 "params": { 00:23:18.179 "impl_name": "posix" 00:23:18.179 } 00:23:18.179 }, 00:23:18.179 
{ 00:23:18.179 "method": "sock_impl_set_options", 00:23:18.179 "params": { 00:23:18.179 "impl_name": "ssl", 00:23:18.179 "recv_buf_size": 4096, 00:23:18.179 "send_buf_size": 4096, 00:23:18.179 "enable_recv_pipe": true, 00:23:18.179 "enable_quickack": false, 00:23:18.179 "enable_placement_id": 0, 00:23:18.179 "enable_zerocopy_send_server": true, 00:23:18.179 "enable_zerocopy_send_client": false, 00:23:18.179 "zerocopy_threshold": 0, 00:23:18.179 "tls_version": 0, 00:23:18.179 "enable_ktls": false 00:23:18.179 } 00:23:18.179 }, 00:23:18.179 { 00:23:18.179 "method": "sock_impl_set_options", 00:23:18.179 "params": { 00:23:18.179 "impl_name": "posix", 00:23:18.179 "recv_buf_size": 2097152, 00:23:18.179 "send_buf_size": 2097152, 00:23:18.179 "enable_recv_pipe": true, 00:23:18.179 "enable_quickack": false, 00:23:18.179 "enable_placement_id": 0, 00:23:18.180 "enable_zerocopy_send_server": true, 00:23:18.180 "enable_zerocopy_send_client": false, 00:23:18.180 "zerocopy_threshold": 0, 00:23:18.180 "tls_version": 0, 00:23:18.180 "enable_ktls": false 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "vmd", 00:23:18.180 "config": [] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "accel", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "accel_set_options", 00:23:18.180 "params": { 00:23:18.180 "small_cache_size": 128, 00:23:18.180 "large_cache_size": 16, 00:23:18.180 "task_count": 2048, 00:23:18.180 "sequence_count": 2048, 00:23:18.180 "buf_count": 2048 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "bdev", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "bdev_set_options", 00:23:18.180 "params": { 00:23:18.180 "bdev_io_pool_size": 65535, 00:23:18.180 "bdev_io_cache_size": 256, 00:23:18.180 "bdev_auto_examine": true, 00:23:18.180 "iobuf_small_cache_size": 128, 00:23:18.180 "iobuf_large_cache_size": 16 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_raid_set_options", 00:23:18.180 "params": { 00:23:18.180 "process_window_size_kb": 1024 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_iscsi_set_options", 00:23:18.180 "params": { 00:23:18.180 "timeout_sec": 30 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_nvme_set_options", 00:23:18.180 "params": { 00:23:18.180 "action_on_timeout": "none", 00:23:18.180 "timeout_us": 0, 00:23:18.180 "timeout_admin_us": 0, 00:23:18.180 "keep_alive_timeout_ms": 10000, 00:23:18.180 "arbitration_burst": 0, 00:23:18.180 "low_priority_weight": 0, 00:23:18.180 "medium_priority_weight": 0, 00:23:18.180 "high_priority_weight": 0, 00:23:18.180 "nvme_adminq_poll_period_us": 10000, 00:23:18.180 "nvme_ioq_poll_period_us": 0, 00:23:18.180 "io_queue_requests": 0, 00:23:18.180 "delay_cmd_submit": true, 00:23:18.180 "transport_retry_count": 4, 00:23:18.180 "bdev_retry_count": 3, 00:23:18.180 "transport_ack_timeout": 0, 00:23:18.180 "ctrlr_loss_timeout_sec": 0, 00:23:18.180 "reconnect_delay_sec": 0, 00:23:18.180 "fast_io_fail_timeout_sec": 0, 00:23:18.180 "disable_auto_failback": false, 00:23:18.180 "generate_uuids": false, 00:23:18.180 "transport_tos": 0, 00:23:18.180 "nvme_error_stat": false, 00:23:18.180 "rdma_srq_size": 0, 00:23:18.180 "io_path_stat": false, 00:23:18.180 "allow_accel_sequence": false, 00:23:18.180 "rdma_max_cq_size": 0, 00:23:18.180 "rdma_cm_event_timeout_ms": 0, 00:23:18.180 "dhchap_digests": [ 00:23:18.180 "sha256", 00:23:18.180 "sha384", 
00:23:18.180 "sha512" 00:23:18.180 ], 00:23:18.180 "dhchap_dhgroups": [ 00:23:18.180 "null", 00:23:18.180 "ffdhe2048", 00:23:18.180 "ffdhe3072", 00:23:18.180 "ffdhe4096", 00:23:18.180 "ffdhe6144", 00:23:18.180 "ffdhe8192" 00:23:18.180 ] 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_nvme_set_hotplug", 00:23:18.180 "params": { 00:23:18.180 "period_us": 100000, 00:23:18.180 "enable": false 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_malloc_create", 00:23:18.180 "params": { 00:23:18.180 "name": "malloc0", 00:23:18.180 "num_blocks": 8192, 00:23:18.180 "block_size": 4096, 00:23:18.180 "physical_block_size": 4096, 00:23:18.180 "uuid": "615faf08-7e67-4589-85e4-22adf6dba3da", 00:23:18.180 "optimal_io_boundary": 0 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "bdev_wait_for_examine" 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "nbd", 00:23:18.180 "config": [] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "scheduler", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "framework_set_scheduler", 00:23:18.180 "params": { 00:23:18.180 "name": "static" 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "nvmf", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "nvmf_set_config", 00:23:18.180 "params": { 00:23:18.180 "discovery_filter": "match_any", 00:23:18.180 "admin_cmd_passthru": { 00:23:18.180 "identify_ctrlr": false 00:23:18.180 } 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_set_max_subsystems", 00:23:18.180 "params": { 00:23:18.180 "max_subsystems": 1024 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_set_crdt", 00:23:18.180 "params": { 00:23:18.180 "crdt1": 0, 00:23:18.180 "crdt2": 0, 00:23:18.180 "crdt3": 0 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_create_transport", 00:23:18.180 "params": { 00:23:18.180 "trtype": "TCP", 00:23:18.180 "max_queue_depth": 128, 00:23:18.180 "max_io_qpairs_per_ctrlr": 127, 00:23:18.180 "in_capsule_data_size": 4096, 00:23:18.180 "max_io_size": 131072, 00:23:18.180 "io_unit_size": 131072, 00:23:18.180 "max_aq_depth": 128, 00:23:18.180 "num_shared_buffers": 511, 00:23:18.180 "buf_cache_size": 4294967295, 00:23:18.180 "dif_insert_or_strip": false, 00:23:18.180 "zcopy": false, 00:23:18.180 "c2h_success": false, 00:23:18.180 "sock_priority": 0, 00:23:18.180 "abort_timeout_sec": 1, 00:23:18.180 "ack_timeout": 0, 00:23:18.180 "data_wr_pool_size": 0 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_create_subsystem", 00:23:18.180 "params": { 00:23:18.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.180 "allow_any_host": false, 00:23:18.180 "serial_number": "00000000000000000000", 00:23:18.180 "model_number": "SPDK bdev Controller", 00:23:18.180 "max_namespaces": 32, 00:23:18.180 "min_cntlid": 1, 00:23:18.180 "max_cntlid": 65519, 00:23:18.180 "ana_reporting": false 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_subsystem_add_host", 00:23:18.180 "params": { 00:23:18.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.180 "host": "nqn.2016-06.io.spdk:host1", 00:23:18.180 "psk": "key0" 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_subsystem_add_ns", 00:23:18.180 "params": { 00:23:18.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.180 "namespace": { 00:23:18.180 "nsid": 1, 00:23:18.180 "bdev_name": 
"malloc0", 00:23:18.180 "nguid": "615FAF087E67458985E422ADF6DBA3DA", 00:23:18.180 "uuid": "615faf08-7e67-4589-85e4-22adf6dba3da", 00:23:18.180 "no_auto_visible": false 00:23:18.180 } 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "nvmf_subsystem_add_listener", 00:23:18.180 "params": { 00:23:18.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.180 "listen_address": { 00:23:18.180 "trtype": "TCP", 00:23:18.180 "adrfam": "IPv4", 00:23:18.180 "traddr": "10.0.0.2", 00:23:18.180 "trsvcid": "4420" 00:23:18.180 }, 00:23:18.180 "secure_channel": true 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }' 00:23:18.180 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:18.180 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:18.180 "subsystems": [ 00:23:18.180 { 00:23:18.180 "subsystem": "keyring", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "keyring_file_add_key", 00:23:18.180 "params": { 00:23:18.180 "name": "key0", 00:23:18.180 "path": "/tmp/tmp.JxAeA3gswv" 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "iobuf", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "iobuf_set_options", 00:23:18.180 "params": { 00:23:18.180 "small_pool_count": 8192, 00:23:18.180 "large_pool_count": 1024, 00:23:18.180 "small_bufsize": 8192, 00:23:18.180 "large_bufsize": 135168 00:23:18.180 } 00:23:18.180 } 00:23:18.180 ] 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "subsystem": "sock", 00:23:18.180 "config": [ 00:23:18.180 { 00:23:18.180 "method": "sock_set_default_impl", 00:23:18.180 "params": { 00:23:18.180 "impl_name": "posix" 00:23:18.180 } 00:23:18.180 }, 00:23:18.180 { 00:23:18.180 "method": "sock_impl_set_options", 00:23:18.180 "params": { 00:23:18.180 "impl_name": "ssl", 00:23:18.180 "recv_buf_size": 4096, 00:23:18.180 "send_buf_size": 4096, 00:23:18.180 "enable_recv_pipe": true, 00:23:18.180 "enable_quickack": false, 00:23:18.180 "enable_placement_id": 0, 00:23:18.180 "enable_zerocopy_send_server": true, 00:23:18.181 "enable_zerocopy_send_client": false, 00:23:18.181 "zerocopy_threshold": 0, 00:23:18.181 "tls_version": 0, 00:23:18.181 "enable_ktls": false 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "sock_impl_set_options", 00:23:18.181 "params": { 00:23:18.181 "impl_name": "posix", 00:23:18.181 "recv_buf_size": 2097152, 00:23:18.181 "send_buf_size": 2097152, 00:23:18.181 "enable_recv_pipe": true, 00:23:18.181 "enable_quickack": false, 00:23:18.181 "enable_placement_id": 0, 00:23:18.181 "enable_zerocopy_send_server": true, 00:23:18.181 "enable_zerocopy_send_client": false, 00:23:18.181 "zerocopy_threshold": 0, 00:23:18.181 "tls_version": 0, 00:23:18.181 "enable_ktls": false 00:23:18.181 } 00:23:18.181 } 00:23:18.181 ] 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "subsystem": "vmd", 00:23:18.181 "config": [] 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "subsystem": "accel", 00:23:18.181 "config": [ 00:23:18.181 { 00:23:18.181 "method": "accel_set_options", 00:23:18.181 "params": { 00:23:18.181 "small_cache_size": 128, 00:23:18.181 "large_cache_size": 16, 00:23:18.181 "task_count": 2048, 00:23:18.181 "sequence_count": 2048, 00:23:18.181 "buf_count": 2048 00:23:18.181 } 00:23:18.181 } 00:23:18.181 ] 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "subsystem": "bdev", 00:23:18.181 "config": [ 00:23:18.181 { 00:23:18.181 
"method": "bdev_set_options", 00:23:18.181 "params": { 00:23:18.181 "bdev_io_pool_size": 65535, 00:23:18.181 "bdev_io_cache_size": 256, 00:23:18.181 "bdev_auto_examine": true, 00:23:18.181 "iobuf_small_cache_size": 128, 00:23:18.181 "iobuf_large_cache_size": 16 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_raid_set_options", 00:23:18.181 "params": { 00:23:18.181 "process_window_size_kb": 1024 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_iscsi_set_options", 00:23:18.181 "params": { 00:23:18.181 "timeout_sec": 30 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_nvme_set_options", 00:23:18.181 "params": { 00:23:18.181 "action_on_timeout": "none", 00:23:18.181 "timeout_us": 0, 00:23:18.181 "timeout_admin_us": 0, 00:23:18.181 "keep_alive_timeout_ms": 10000, 00:23:18.181 "arbitration_burst": 0, 00:23:18.181 "low_priority_weight": 0, 00:23:18.181 "medium_priority_weight": 0, 00:23:18.181 "high_priority_weight": 0, 00:23:18.181 "nvme_adminq_poll_period_us": 10000, 00:23:18.181 "nvme_ioq_poll_period_us": 0, 00:23:18.181 "io_queue_requests": 512, 00:23:18.181 "delay_cmd_submit": true, 00:23:18.181 "transport_retry_count": 4, 00:23:18.181 "bdev_retry_count": 3, 00:23:18.181 "transport_ack_timeout": 0, 00:23:18.181 "ctrlr_loss_timeout_sec": 0, 00:23:18.181 "reconnect_delay_sec": 0, 00:23:18.181 "fast_io_fail_timeout_sec": 0, 00:23:18.181 "disable_auto_failback": false, 00:23:18.181 "generate_uuids": false, 00:23:18.181 "transport_tos": 0, 00:23:18.181 "nvme_error_stat": false, 00:23:18.181 "rdma_srq_size": 0, 00:23:18.181 "io_path_stat": false, 00:23:18.181 "allow_accel_sequence": false, 00:23:18.181 "rdma_max_cq_size": 0, 00:23:18.181 "rdma_cm_event_timeout_ms": 0, 00:23:18.181 "dhchap_digests": [ 00:23:18.181 "sha256", 00:23:18.181 "sha384", 00:23:18.181 "sha512" 00:23:18.181 ], 00:23:18.181 "dhchap_dhgroups": [ 00:23:18.181 "null", 00:23:18.181 "ffdhe2048", 00:23:18.181 "ffdhe3072", 00:23:18.181 "ffdhe4096", 00:23:18.181 "ffdhe6144", 00:23:18.181 "ffdhe8192" 00:23:18.181 ] 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_nvme_attach_controller", 00:23:18.181 "params": { 00:23:18.181 "name": "nvme0", 00:23:18.181 "trtype": "TCP", 00:23:18.181 "adrfam": "IPv4", 00:23:18.181 "traddr": "10.0.0.2", 00:23:18.181 "trsvcid": "4420", 00:23:18.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.181 "prchk_reftag": false, 00:23:18.181 "prchk_guard": false, 00:23:18.181 "ctrlr_loss_timeout_sec": 0, 00:23:18.181 "reconnect_delay_sec": 0, 00:23:18.181 "fast_io_fail_timeout_sec": 0, 00:23:18.181 "psk": "key0", 00:23:18.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.181 "hdgst": false, 00:23:18.181 "ddgst": false 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_nvme_set_hotplug", 00:23:18.181 "params": { 00:23:18.181 "period_us": 100000, 00:23:18.181 "enable": false 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_enable_histogram", 00:23:18.181 "params": { 00:23:18.181 "name": "nvme0n1", 00:23:18.181 "enable": true 00:23:18.181 } 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "method": "bdev_wait_for_examine" 00:23:18.181 } 00:23:18.181 ] 00:23:18.181 }, 00:23:18.181 { 00:23:18.181 "subsystem": "nbd", 00:23:18.181 "config": [] 00:23:18.181 } 00:23:18.181 ] 00:23:18.181 }' 00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 471578 00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 471578 ']' 
00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 471578 00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:18.181 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 471578 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 471578' 00:23:18.442 killing process with pid 471578 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 471578 00:23:18.442 Received shutdown signal, test time was about 1.000000 seconds 00:23:18.442 00:23:18.442 Latency(us) 00:23:18.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.442 =================================================================================================================== 00:23:18.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 471578 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 471334 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 471334 ']' 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 471334 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 471334 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 471334' 00:23:18.442 killing process with pid 471334 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 471334 00:23:18.442 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 471334 00:23:18.702 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:18.702 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.702 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:18.702 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:18.702 "subsystems": [ 00:23:18.702 { 00:23:18.703 "subsystem": "keyring", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "keyring_file_add_key", 00:23:18.703 "params": { 00:23:18.703 "name": "key0", 00:23:18.703 "path": "/tmp/tmp.JxAeA3gswv" 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "iobuf", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "iobuf_set_options", 00:23:18.703 "params": { 00:23:18.703 "small_pool_count": 8192, 00:23:18.703 "large_pool_count": 1024, 00:23:18.703 "small_bufsize": 8192, 00:23:18.703 "large_bufsize": 135168 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "sock", 00:23:18.703 "config": 
[ 00:23:18.703 { 00:23:18.703 "method": "sock_set_default_impl", 00:23:18.703 "params": { 00:23:18.703 "impl_name": "posix" 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "sock_impl_set_options", 00:23:18.703 "params": { 00:23:18.703 "impl_name": "ssl", 00:23:18.703 "recv_buf_size": 4096, 00:23:18.703 "send_buf_size": 4096, 00:23:18.703 "enable_recv_pipe": true, 00:23:18.703 "enable_quickack": false, 00:23:18.703 "enable_placement_id": 0, 00:23:18.703 "enable_zerocopy_send_server": true, 00:23:18.703 "enable_zerocopy_send_client": false, 00:23:18.703 "zerocopy_threshold": 0, 00:23:18.703 "tls_version": 0, 00:23:18.703 "enable_ktls": false 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "sock_impl_set_options", 00:23:18.703 "params": { 00:23:18.703 "impl_name": "posix", 00:23:18.703 "recv_buf_size": 2097152, 00:23:18.703 "send_buf_size": 2097152, 00:23:18.703 "enable_recv_pipe": true, 00:23:18.703 "enable_quickack": false, 00:23:18.703 "enable_placement_id": 0, 00:23:18.703 "enable_zerocopy_send_server": true, 00:23:18.703 "enable_zerocopy_send_client": false, 00:23:18.703 "zerocopy_threshold": 0, 00:23:18.703 "tls_version": 0, 00:23:18.703 "enable_ktls": false 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "vmd", 00:23:18.703 "config": [] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "accel", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "accel_set_options", 00:23:18.703 "params": { 00:23:18.703 "small_cache_size": 128, 00:23:18.703 "large_cache_size": 16, 00:23:18.703 "task_count": 2048, 00:23:18.703 "sequence_count": 2048, 00:23:18.703 "buf_count": 2048 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "bdev", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "bdev_set_options", 00:23:18.703 "params": { 00:23:18.703 "bdev_io_pool_size": 65535, 00:23:18.703 "bdev_io_cache_size": 256, 00:23:18.703 "bdev_auto_examine": true, 00:23:18.703 "iobuf_small_cache_size": 128, 00:23:18.703 "iobuf_large_cache_size": 16 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_raid_set_options", 00:23:18.703 "params": { 00:23:18.703 "process_window_size_kb": 1024 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_iscsi_set_options", 00:23:18.703 "params": { 00:23:18.703 "timeout_sec": 30 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_nvme_set_options", 00:23:18.703 "params": { 00:23:18.703 "action_on_timeout": "none", 00:23:18.703 "timeout_us": 0, 00:23:18.703 "timeout_admin_us": 0, 00:23:18.703 "keep_alive_timeout_ms": 10000, 00:23:18.703 "arbitration_burst": 0, 00:23:18.703 "low_priority_weight": 0, 00:23:18.703 "medium_priority_weight": 0, 00:23:18.703 "high_priority_weight": 0, 00:23:18.703 "nvme_adminq_poll_period_us": 10000, 00:23:18.703 "nvme_ioq_poll_period_us": 0, 00:23:18.703 "io_queue_requests": 0, 00:23:18.703 "delay_cmd_submit": true, 00:23:18.703 "transport_retry_count": 4, 00:23:18.703 "bdev_retry_count": 3, 00:23:18.703 "transport_ack_timeout": 0, 00:23:18.703 "ctrlr_loss_timeout_sec": 0, 00:23:18.703 "reconnect_delay_sec": 0, 00:23:18.703 "fast_io_fail_timeout_sec": 0, 00:23:18.703 "disable_auto_failback": false, 00:23:18.703 "generate_uuids": false, 00:23:18.703 "transport_tos": 0, 00:23:18.703 "nvme_error_stat": false, 00:23:18.703 "rdma_srq_size": 0, 00:23:18.703 "io_path_stat": false, 00:23:18.703 "allow_accel_sequence": 
false, 00:23:18.703 "rdma_max_cq_size": 0, 00:23:18.703 "rdma_cm_event_timeout_ms": 0, 00:23:18.703 "dhchap_digests": [ 00:23:18.703 "sha256", 00:23:18.703 "sha384", 00:23:18.703 "sha512" 00:23:18.703 ], 00:23:18.703 "dhchap_dhgroups": [ 00:23:18.703 "null", 00:23:18.703 "ffdhe2048", 00:23:18.703 "ffdhe3072", 00:23:18.703 "ffdhe4096", 00:23:18.703 "ffdhe6144", 00:23:18.703 "ffdhe8192" 00:23:18.703 ] 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_nvme_set_hotplug", 00:23:18.703 "params": { 00:23:18.703 "period_us": 100000, 00:23:18.703 "enable": false 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_malloc_create", 00:23:18.703 "params": { 00:23:18.703 "name": "malloc0", 00:23:18.703 "num_blocks": 8192, 00:23:18.703 "block_size": 4096, 00:23:18.703 "physical_block_size": 4096, 00:23:18.703 "uuid": "615faf08-7e67-4589-85e4-22adf6dba3da", 00:23:18.703 "optimal_io_boundary": 0 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "bdev_wait_for_examine" 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "nbd", 00:23:18.703 "config": [] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "scheduler", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "framework_set_scheduler", 00:23:18.703 "params": { 00:23:18.703 "name": "static" 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "subsystem": "nvmf", 00:23:18.703 "config": [ 00:23:18.703 { 00:23:18.703 "method": "nvmf_set_config", 00:23:18.703 "params": { 00:23:18.703 "discovery_filter": "match_any", 00:23:18.703 "admin_cmd_passthru": { 00:23:18.703 "identify_ctrlr": false 00:23:18.703 } 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_set_max_subsystems", 00:23:18.703 "params": { 00:23:18.703 "max_subsystems": 1024 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_set_crdt", 00:23:18.703 "params": { 00:23:18.703 "crdt1": 0, 00:23:18.703 "crdt2": 0, 00:23:18.703 "crdt3": 0 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_create_transport", 00:23:18.703 "params": { 00:23:18.703 "trtype": "TCP", 00:23:18.703 "max_queue_depth": 128, 00:23:18.703 "max_io_qpairs_per_ctrlr": 127, 00:23:18.703 "in_capsule_data_size": 4096, 00:23:18.703 "max_io_size": 131072, 00:23:18.703 "io_unit_size": 131072, 00:23:18.703 "max_aq_depth": 128, 00:23:18.703 "num_shared_buffers": 511, 00:23:18.703 "buf_cache_size": 4294967295, 00:23:18.703 "dif_insert_or_strip": false, 00:23:18.703 "zcopy": false, 00:23:18.703 "c2h_success": false, 00:23:18.703 "sock_priority": 0, 00:23:18.703 "abort_timeout_sec": 1, 00:23:18.703 "ack_timeout": 0, 00:23:18.703 "data_wr_pool_size": 0 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_create_subsystem", 00:23:18.703 "params": { 00:23:18.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.703 00:23:18.703 "allow_any_host": false, 00:23:18.703 "serial_number": "00000000000000000000", 00:23:18.703 "model_number": "SPDK bdev Controller", 00:23:18.703 "max_namespaces": 32, 00:23:18.703 "min_cntlid": 1, 00:23:18.703 "max_cntlid": 65519, 00:23:18.703 "ana_reporting": false 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_subsystem_add_host", 00:23:18.703 "params": { 00:23:18.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.703 "host": "nqn.2016-06.io.spdk:host1", 00:23:18.703 "psk": "key0" 
00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_subsystem_add_ns", 00:23:18.703 "params": { 00:23:18.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.703 "namespace": { 00:23:18.703 "nsid": 1, 00:23:18.703 "bdev_name": "malloc0", 00:23:18.703 "nguid": "615FAF087E67458985E422ADF6DBA3DA", 00:23:18.703 "uuid": "615faf08-7e67-4589-85e4-22adf6dba3da", 00:23:18.703 "no_auto_visible": false 00:23:18.703 } 00:23:18.703 } 00:23:18.703 }, 00:23:18.703 { 00:23:18.703 "method": "nvmf_subsystem_add_listener", 00:23:18.703 "params": { 00:23:18.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.703 "listen_address": { 00:23:18.703 "trtype": "TCP", 00:23:18.703 "adrfam": "IPv4", 00:23:18.703 "traddr": "10.0.0.2", 00:23:18.703 "trsvcid": "4420" 00:23:18.703 }, 00:23:18.703 "secure_channel": true 00:23:18.703 } 00:23:18.703 } 00:23:18.703 ] 00:23:18.704 } 00:23:18.704 ] 00:23:18.704 }' 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=472077 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 472077 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 472077 ']' 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:18.704 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.704 [2024-06-08 00:48:36.873849] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:18.704 [2024-06-08 00:48:36.873905] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.704 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.704 [2024-06-08 00:48:36.938031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.964 [2024-06-08 00:48:37.002528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.964 [2024-06-08 00:48:37.002564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.964 [2024-06-08 00:48:37.002571] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.964 [2024-06-08 00:48:37.002578] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.964 [2024-06-08 00:48:37.002583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
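The app_setup_trace notices above name the two ways to get at this target's trace ring: a live snapshot via spdk_trace, or grabbing the raw ring from /dev/shm. A minimal sketch of both, using the instance id (-i 0) and shm path from the notices (the output filename here is illustrative, not from this run):

    # decode a live snapshot of the nvmf trace ring for app instance 0
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or archive the raw shm ring for offline decoding, as this job's cleanup step does later
    tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0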
00:23:18.964 [2024-06-08 00:48:37.002634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.964 [2024-06-08 00:48:37.199816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.964 [2024-06-08 00:48:37.231820] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.224 [2024-06-08 00:48:37.253713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=472394 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 472394 /var/tmp/bdevperf.sock 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 472394 ']' 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
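The target is now listening on 10.0.0.2:4420 with secure_channel enabled, and a second bdevperf (pid 472394) is being launched against it. Where the first bdevperf instance (pid 471578) set up TLS with two explicit RPCs, this one gets the equivalent keyring_file_add_key and bdev_nvme_attach_controller calls baked into its startup config. The RPC form, as logged at 00:48:34 (paths shortened; the log uses the full scripts/rpc.py path, and the key name, key file and NQNs are the ones from this run), reduces to:

    # load the pre-shared key into the app keyring, then attach over TCP with TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JxAeA3gswv
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1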
00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.485 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:19.485 "subsystems": [ 00:23:19.485 { 00:23:19.485 "subsystem": "keyring", 00:23:19.485 "config": [ 00:23:19.485 { 00:23:19.485 "method": "keyring_file_add_key", 00:23:19.485 "params": { 00:23:19.485 "name": "key0", 00:23:19.485 "path": "/tmp/tmp.JxAeA3gswv" 00:23:19.485 } 00:23:19.485 } 00:23:19.485 ] 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "subsystem": "iobuf", 00:23:19.485 "config": [ 00:23:19.485 { 00:23:19.485 "method": "iobuf_set_options", 00:23:19.485 "params": { 00:23:19.485 "small_pool_count": 8192, 00:23:19.485 "large_pool_count": 1024, 00:23:19.485 "small_bufsize": 8192, 00:23:19.485 "large_bufsize": 135168 00:23:19.485 } 00:23:19.485 } 00:23:19.485 ] 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "subsystem": "sock", 00:23:19.485 "config": [ 00:23:19.485 { 00:23:19.485 "method": "sock_set_default_impl", 00:23:19.485 "params": { 00:23:19.485 "impl_name": "posix" 00:23:19.485 } 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "method": "sock_impl_set_options", 00:23:19.485 "params": { 00:23:19.485 "impl_name": "ssl", 00:23:19.485 "recv_buf_size": 4096, 00:23:19.485 "send_buf_size": 4096, 00:23:19.485 "enable_recv_pipe": true, 00:23:19.485 "enable_quickack": false, 00:23:19.485 "enable_placement_id": 0, 00:23:19.485 "enable_zerocopy_send_server": true, 00:23:19.485 "enable_zerocopy_send_client": false, 00:23:19.485 "zerocopy_threshold": 0, 00:23:19.485 "tls_version": 0, 00:23:19.485 "enable_ktls": false 00:23:19.485 } 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "method": "sock_impl_set_options", 00:23:19.485 "params": { 00:23:19.485 "impl_name": "posix", 00:23:19.485 "recv_buf_size": 2097152, 00:23:19.485 "send_buf_size": 2097152, 00:23:19.485 "enable_recv_pipe": true, 00:23:19.485 "enable_quickack": false, 00:23:19.485 "enable_placement_id": 0, 00:23:19.485 "enable_zerocopy_send_server": true, 00:23:19.485 "enable_zerocopy_send_client": false, 00:23:19.485 "zerocopy_threshold": 0, 00:23:19.485 "tls_version": 0, 00:23:19.485 "enable_ktls": false 00:23:19.485 } 00:23:19.485 } 00:23:19.485 ] 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "subsystem": "vmd", 00:23:19.485 "config": [] 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "subsystem": "accel", 00:23:19.485 "config": [ 00:23:19.485 { 00:23:19.485 "method": "accel_set_options", 00:23:19.485 "params": { 00:23:19.485 "small_cache_size": 128, 00:23:19.485 "large_cache_size": 16, 00:23:19.485 "task_count": 2048, 00:23:19.485 "sequence_count": 2048, 00:23:19.485 "buf_count": 2048 00:23:19.485 } 00:23:19.485 } 00:23:19.485 ] 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "subsystem": "bdev", 00:23:19.485 "config": [ 00:23:19.485 { 00:23:19.485 "method": "bdev_set_options", 00:23:19.485 "params": { 00:23:19.485 "bdev_io_pool_size": 65535, 00:23:19.485 "bdev_io_cache_size": 256, 00:23:19.485 "bdev_auto_examine": true, 00:23:19.485 "iobuf_small_cache_size": 128, 00:23:19.485 "iobuf_large_cache_size": 16 00:23:19.485 } 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "method": "bdev_raid_set_options", 00:23:19.485 "params": { 00:23:19.485 "process_window_size_kb": 1024 00:23:19.485 } 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "method": "bdev_iscsi_set_options", 00:23:19.485 "params": { 00:23:19.485 "timeout_sec": 30 00:23:19.485 } 00:23:19.485 }, 00:23:19.485 { 00:23:19.485 "method": 
"bdev_nvme_set_options", 00:23:19.485 "params": { 00:23:19.485 "action_on_timeout": "none", 00:23:19.485 "timeout_us": 0, 00:23:19.485 "timeout_admin_us": 0, 00:23:19.485 "keep_alive_timeout_ms": 10000, 00:23:19.486 "arbitration_burst": 0, 00:23:19.486 "low_priority_weight": 0, 00:23:19.486 "medium_priority_weight": 0, 00:23:19.486 "high_priority_weight": 0, 00:23:19.486 "nvme_adminq_poll_period_us": 10000, 00:23:19.486 "nvme_ioq_poll_period_us": 0, 00:23:19.486 "io_queue_requests": 512, 00:23:19.486 "delay_cmd_submit": true, 00:23:19.486 "transport_retry_count": 4, 00:23:19.486 "bdev_retry_count": 3, 00:23:19.486 "transport_ack_timeout": 0, 00:23:19.486 "ctrlr_loss_timeout_sec": 0, 00:23:19.486 "reconnect_delay_sec": 0, 00:23:19.486 "fast_io_fail_timeout_sec": 0, 00:23:19.486 "disable_auto_failback": false, 00:23:19.486 "generate_uuids": false, 00:23:19.486 "transport_tos": 0, 00:23:19.486 "nvme_error_stat": false, 00:23:19.486 "rdma_srq_size": 0, 00:23:19.486 "io_path_stat": false, 00:23:19.486 "allow_accel_sequence": false, 00:23:19.486 "rdma_max_cq_size": 0, 00:23:19.486 "rdma_cm_event_timeout_ms": 0, 00:23:19.486 "dhchap_digests": [ 00:23:19.486 "sha256", 00:23:19.486 "sha384", 00:23:19.486 "sha512" 00:23:19.486 ], 00:23:19.486 "dhchap_dhgroups": [ 00:23:19.486 "null", 00:23:19.486 "ffdhe2048", 00:23:19.486 "ffdhe3072", 00:23:19.486 "ffdhe4096", 00:23:19.486 "ffdhe6144", 00:23:19.486 "ffdhe8192" 00:23:19.486 ] 00:23:19.486 } 00:23:19.486 }, 00:23:19.486 { 00:23:19.486 "method": "bdev_nvme_attach_controller", 00:23:19.486 "params": { 00:23:19.486 "name": "nvme0", 00:23:19.486 "trtype": "TCP", 00:23:19.486 "adrfam": "IPv4", 00:23:19.486 "traddr": "10.0.0.2", 00:23:19.486 "trsvcid": "4420", 00:23:19.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.486 "prchk_reftag": false, 00:23:19.486 "prchk_guard": false, 00:23:19.486 "ctrlr_loss_timeout_sec": 0, 00:23:19.486 "reconnect_delay_sec": 0, 00:23:19.486 "fast_io_fail_timeout_sec": 0, 00:23:19.486 "psk": "key0", 00:23:19.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.486 "hdgst": false, 00:23:19.486 "ddgst": false 00:23:19.486 } 00:23:19.486 }, 00:23:19.486 { 00:23:19.486 "method": "bdev_nvme_set_hotplug", 00:23:19.486 "params": { 00:23:19.486 "period_us": 100000, 00:23:19.486 "enable": false 00:23:19.486 } 00:23:19.486 }, 00:23:19.486 { 00:23:19.486 "method": "bdev_enable_histogram", 00:23:19.486 "params": { 00:23:19.486 "name": "nvme0n1", 00:23:19.486 "enable": true 00:23:19.486 } 00:23:19.486 }, 00:23:19.486 { 00:23:19.486 "method": "bdev_wait_for_examine" 00:23:19.486 } 00:23:19.486 ] 00:23:19.486 }, 00:23:19.486 { 00:23:19.486 "subsystem": "nbd", 00:23:19.486 "config": [] 00:23:19.486 } 00:23:19.486 ] 00:23:19.486 }' 00:23:19.486 [2024-06-08 00:48:37.717565] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:23:19.486 [2024-06-08 00:48:37.717613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472394 ] 00:23:19.486 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.746 [2024-06-08 00:48:37.791493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.746 [2024-06-08 00:48:37.845049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.746 [2024-06-08 00:48:37.978521] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.317 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:20.317 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:20.317 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.317 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:20.577 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.577 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.577 Running I/O for 1 seconds... 00:23:21.517 00:23:21.517 Latency(us) 00:23:21.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.517 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:21.517 Verification LBA range: start 0x0 length 0x2000 00:23:21.517 nvme0n1 : 1.06 2508.64 9.80 0.00 0.00 49825.40 4805.97 111848.11 00:23:21.517 =================================================================================================================== 00:23:21.517 Total : 2508.64 9.80 0.00 0.00 49825.40 4805.97 111848.11 00:23:21.517 0 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:21.517 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:21.778 nvmf_trace.0 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 472394 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 472394 ']' 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 472394 
00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 472394 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 472394' 00:23:21.778 killing process with pid 472394 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 472394 00:23:21.778 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.778 00:23:21.778 Latency(us) 00:23:21.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.778 =================================================================================================================== 00:23:21.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.778 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 472394 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.778 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.778 rmmod nvme_tcp 00:23:22.039 rmmod nvme_fabrics 00:23:22.039 rmmod nvme_keyring 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 472077 ']' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 472077 ']' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 472077' 00:23:22.039 killing process with pid 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 472077 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.039 00:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.584 00:48:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.584 00:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0EoXo58myd /tmp/tmp.2KgPmZeUE3 /tmp/tmp.JxAeA3gswv 00:23:24.584 00:23:24.584 real 1m23.703s 00:23:24.584 user 2m7.132s 00:23:24.584 sys 0m28.774s 00:23:24.584 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.584 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.584 ************************************ 00:23:24.584 END TEST nvmf_tls 00:23:24.584 ************************************ 00:23:24.584 00:48:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:24.584 00:48:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:24.584 00:48:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:24.584 00:48:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.584 ************************************ 00:23:24.584 START TEST nvmf_fips 00:23:24.584 ************************************ 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:24.585 * Looking for test storage... 
00:23:24.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.585 00:48:42 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:24.585 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:23:24.586 Error setting digest 00:23:24.586 0092306C407F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:24.586 0092306C407F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.586 00:48:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.725 
00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.725 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.726 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.726 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:23:32.726 00:23:32.726 --- 10.0.0.2 ping statistics --- 00:23:32.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.726 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:32.726 00:23:32.726 --- 10.0.0.1 ping statistics --- 00:23:32.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.726 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=477095 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 477095 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 477095 ']' 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:32.726 00:48:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:32.726 [2024-06-08 00:48:50.018475] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:32.726 [2024-06-08 00:48:50.018526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.726 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.726 [2024-06-08 00:48:50.099112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.726 [2024-06-08 00:48:50.168620] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.726 [2024-06-08 00:48:50.168664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:32.726 [2024-06-08 00:48:50.168677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.726 [2024-06-08 00:48:50.168683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.726 [2024-06-08 00:48:50.168689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.726 [2024-06-08 00:48:50.168716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:32.726 00:48:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.726 [2024-06-08 00:48:50.997918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.987 [2024-06-08 00:48:51.013924] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.987 [2024-06-08 00:48:51.014224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.987 [2024-06-08 00:48:51.044023] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.987 malloc0 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=477230 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 477230 /var/tmp/bdevperf.sock 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 477230 ']' 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:32.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:32.987 00:48:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.987 [2024-06-08 00:48:51.153762] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:32.987 [2024-06-08 00:48:51.153832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477230 ] 00:23:32.987 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.987 [2024-06-08 00:48:51.208726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.248 [2024-06-08 00:48:51.273326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.817 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:33.817 00:48:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:33.817 00:48:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:33.817 [2024-06-08 00:48:52.036937] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.817 [2024-06-08 00:48:52.036998] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:34.077 TLSTESTn1 00:23:34.077 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.077 Running I/O for 10 seconds... 
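The trace above is the heart of the TLS exercise: bdevperf is started in RPC-driven mode (-z) on its own socket, a controller is attached over TCP with the pre-shared key written by fips.sh, and the queued verify workload is kicked off through bdevperf.py. Condensed into a standalone sketch (the commands are the same ones traced here; the harness's waitforlisten step is stood in for by a sleep, and the key is the throwaway test key from this log, not a secret):

    # Sketch of the attach-and-run sequence traced above, assuming a target
    # already listening with TLS on 10.0.0.2:4420 as set up earlier in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=$SPDK/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > $KEY
    chmod 0600 $KEY
    # -z makes bdevperf wait on its RPC socket until bdevs exist and tests are requested.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    sleep 1   # the harness waits for the RPC socket instead of sleeping
    # The TLS handshake happens on attach; --psk points at the 0600 key file.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $KEY
    # Run the 10 s verify workload against the attached namespace (TLSTESTn1).
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests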
00:23:44.113
00:23:44.113                                               Latency(us)
00:23:44.113 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min        max
00:23:44.113 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:44.113 Verification LBA range: start 0x0 length 0x2000
00:23:44.113 TLSTESTn1                   : 10.03        2712.47  10.60   0.00     0.00    47116.80     6089.39    103109.97
00:23:44.113 ===================================================================================================================
00:23:44.113 Total                       :              2712.47  10.60   0.00     0.00    47116.80     6089.39    103109.97
00:23:44.113 0
00:49:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:49:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:49:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 477230 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 477230 ']' 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 477230 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 477230 00:23:44.373 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 477230' killing process with pid 477230 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 477230
Received shutdown signal, test time was about 10.000000 seconds
00:23:44.374
00:23:44.374                                               Latency(us)
00:23:44.374 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min        max
00:23:44.374 ===================================================================================================================
00:23:44.374 Total                       :              0.00     0.00    0.00     0.00    0.00         0.00       0.00
00:23:44.374 [2024-06-08 00:49:02.425393] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 477230 00:49:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:49:02 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.374 rmmod nvme_tcp 00:23:44.374 rmmod nvme_fabrics 00:23:44.374 rmmod nvme_keyring 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 477095 ']' 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 477095 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 477095 ']' 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 477095 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 477095 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 477095' 00:23:44.374 killing process with pid 477095 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 477095 00:23:44.374 [2024-06-08 00:49:02.652525] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:44.374 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 477095 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.634 00:49:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.546 00:49:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.546 00:49:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:46.546 00:23:46.546 real 0m22.356s 00:23:46.546 user 0m22.938s 00:23:46.546 sys 0m10.051s 00:23:46.546 00:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:46.546 00:49:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:46.546 ************************************ 00:23:46.546 END TEST nvmf_fips 00:23:46.546 
************************************ 00:23:46.808 00:49:04 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:46.808 00:49:04 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:46.808 00:49:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:46.808 00:49:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:46.808 00:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.808 ************************************ 00:23:46.808 START TEST nvmf_fuzz 00:23:46.808 ************************************ 00:23:46.808 00:49:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:46.808 * Looking for test storage... 00:23:46.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.808 00:49:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.808 00:49:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.808 00:49:05 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.808 00:49:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.395 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:53.395 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.395 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.395 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.396 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.396 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:23:53.656 00:23:53.656 --- 10.0.0.2 ping statistics --- 00:23:53.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.656 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:23:53.656 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:23:53.917 00:23:53.917 --- 10.0.0.1 ping statistics --- 00:23:53.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.917 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=483473 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 483473 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 483473 ']' 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
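With the target for the fuzz run starting inside the cvl_0_0_ns_spdk namespace, the records that follow configure it before nvme_fuzz connects: a TCP transport, a 64 MiB malloc bdev, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. The same sequence, condensed (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; this is a sketch, not the harness itself):

    # Target setup as traced below; -u 8192 sets an 8 KiB I/O unit size,
    # and 64/512 are the malloc bdev size in MiB and its block size in bytes.
    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420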
00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:53.917 00:49:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 Malloc0 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:54.861 00:49:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:26.976 Fuzzing completed. 
Shutting down the fuzz application
00:24:26.976
00:24:26.976 Dumping successful admin opcodes:
00:24:26.976 8, 9, 10, 24,
00:24:26.976 Dumping successful io opcodes:
00:24:26.976 0, 9,
00:24:26.976 NS: 0x200003aeff00 I/O qp, Total commands completed: 932782, total successful commands: 5437, random_seed: 389226944
00:24:26.976 NS: 0x200003aeff00 admin qp, Total commands completed: 116932, total successful commands: 957, random_seed: 4154217344
00:24:26.976 00:49:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:24:26.976 Fuzzing completed. Shutting down the fuzz application
00:24:26.976
00:24:26.976 Dumping successful admin opcodes:
00:24:26.976 24,
00:24:26.976 Dumping successful io opcodes:
00:24:26.976
00:24:26.976 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3794197829
00:24:26.976 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3794265657
00:24:26.976 00:49:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:49:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 483473 ']' 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 483473 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 483473 ']' 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 483473 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 483473 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
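The opcode lists in the dumps above are printed in decimal. Read against the standard NVMe opcode tables (assuming the base admin command set and the NVM I/O command set, which is what this target speaks), the 30-second random pass got Abort, Set Features, Get Features and Keep Alive through on the admin queue, and Flush and Dataset Management on the I/O queue:

    # Decimal opcodes from the dumps, shown in hex; names per the NVMe spec tables.
    for op in 8 9 10 24; do printf 'admin 0x%02x\n' "$op"; done   # 0x08 Abort, 0x09 Set Features,
                                                                  # 0x0a Get Features, 0x18 Keep Alive
    for op in 0 9; do printf 'io 0x%02x\n' "$op"; done            # 0x00 Flush, 0x09 Dataset Management

The second, json-guided pass (-j example.json) replays a small canned command set instead of fuzzing randomly for 30 seconds, which is why its completed-command counts are tiny by comparison.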
00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 483473' 00:24:26.976 killing process with pid 483473 00:24:26.976 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 483473 00:24:26.976 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 483473 00:24:26.976 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.977 00:49:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.927 00:49:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.927 00:49:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:28.927 00:24:28.927 real 0m42.086s 00:24:28.927 user 0m56.195s 00:24:28.927 sys 0m15.459s 00:24:28.927 00:49:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:28.927 00:49:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.927 ************************************ 00:24:28.927 END TEST nvmf_fuzz 00:24:28.927 ************************************ 00:24:28.927 00:49:47 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:28.927 00:49:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:28.927 00:49:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:28.927 00:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.927 ************************************ 00:24:28.927 START TEST nvmf_multiconnection 00:24:28.927 ************************************ 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:28.927 * Looking for test storage... 
00:24:28.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.927 00:49:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.928 00:49:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.099 00:49:54 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:37.099 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:37.099 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.099 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:37.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:37.100 00:49:54 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:37.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:24:37.100 00:24:37.100 --- 10.0.0.2 ping statistics --- 00:24:37.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.100 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:24:37.100 00:24:37.100 --- 10.0.0.1 ping statistics --- 00:24:37.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.100 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=494052 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 494052 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 494052 ']' 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
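At this point nvmf_tcp_init has finished: one port of the e810 pair (cvl_0_0) has been moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the default namespace as the initiator, reachability has been verified in both directions, and nvmf_tgt is being launched inside the namespace. Condensed from the commands scattered through the trace above, the topology setup amounts to the following sketch (interface names are the renamed e810 ports):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Isolating the target port in a namespace forces the NVMe/TCP traffic onto the physical link between the paired ports (NET_TYPE=phy) rather than the kernel loopback.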
00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:37.100 00:49:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 [2024-06-08 00:49:54.482374] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:24:37.100 [2024-06-08 00:49:54.482447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.100 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.100 [2024-06-08 00:49:54.553148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.100 [2024-06-08 00:49:54.629437] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.100 [2024-06-08 00:49:54.629475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.100 [2024-06-08 00:49:54.629482] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.100 [2024-06-08 00:49:54.629489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.100 [2024-06-08 00:49:54.629495] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.100 [2024-06-08 00:49:54.629634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.100 [2024-06-08 00:49:54.629748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.100 [2024-06-08 00:49:54.629904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.100 [2024-06-08 00:49:54.629905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 [2024-06-08 00:49:55.313001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.100 00:49:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 Malloc1 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.100 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.101 [2024-06-08 00:49:55.377886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 Malloc2 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 Malloc3 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 Malloc4 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 Malloc5 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.362 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.363 Malloc6 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.363 00:49:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.363 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 Malloc7 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 Malloc8 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 Malloc9 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 Malloc10 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 Malloc11 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.624 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
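The eleven near-identical passes above (Malloc1 through Malloc11) are the provisioning loop traced from multiconnection.sh lines 21-25: each pass backs one subsystem with a RAM-based malloc bdev and exposes it on the shared TCP listener. As a sketch assembled from the traced commands (rpc_cmd is the harness wrapper around scripts/rpc.py, pointed at the nvmf_tgt started earlier; NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per the variables set above):

  for i in $(seq 1 $NVMF_SUBSYS); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                 # 64 MiB bdev, 512 B blocks
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

All eleven subsystems share the one listener at 10.0.0.2:4420; they are distinguished by subsystem NQN (cnode1..cnode11) and serial (SPDK1..SPDK11), which is what the connect phase below keys on.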
00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.885 00:49:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:39.268 00:49:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:39.268 00:49:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:39.268 00:49:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.268 00:49:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:39.268 00:49:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.808 00:49:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:42.748 00:50:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:42.748 00:50:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:42.748 00:50:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.748 00:50:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:42.748 00:50:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.288 
00:50:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.288 00:50:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:46.669 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:46.669 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:46.669 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.669 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:46.669 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.580 00:50:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:50.492 00:50:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:50.492 00:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:50.492 00:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.492 00:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:50.492 00:50:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.402 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:53.784 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:53.784 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:53.784 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.784 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:53.784 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.696 00:50:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:57.606 00:50:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:57.606 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:24:57.607 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.607 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:57.607 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.562 00:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:01.478 00:50:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:01.478 00:50:19 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:25:01.478 00:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.478 00:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:01.478 00:50:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.392 00:50:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:05.304 00:50:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:05.304 00:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:25:05.304 00:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.304 00:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:05.304 00:50:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.217 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:07.218 00:50:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.218 00:50:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:09.137 00:50:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:09.138 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:25:09.138 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.138 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
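The connect phase interleaves two pieces per subsystem: an nvme connect issued from the initiator side, and the waitforserial helper, which polls lsblk until a block device carrying the expected serial appears (the sleep 2 on the next trace line is one such polling step, here waiting on SPDK9). Condensed for a single subsystem, and paraphrasing the helper's internals from the traced line numbers (the real helper returns rather than breaks, and the 15-iteration bound and 2 s sleep are read off the trace):

  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
  i=0; nvme_device_counter=1
  while (( i++ <= 15 )); do
      sleep 2                                                         # give the controller time to enumerate
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDK9)         # count devices with this serial
      (( nvme_devices == nvme_device_counter )) && break              # namespace visible: connection usable
  done

Polling on the serial rather than a fixed device name matters here because eleven connects run back to back and the kernel assigns nvmeXn1 names in whatever order the controllers come up.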
00:25:09.138 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.050 00:50:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:12.964 00:50:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:12.964 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:25:12.964 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.964 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:12.964 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.879 00:50:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:16.790 00:50:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:16.790 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:25:16.790 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.790 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:16.790 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:25:18.700 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:25:18.701 00:50:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:18.701 [global] 00:25:18.701 thread=1 00:25:18.701 invalidate=1 00:25:18.701 rw=read 00:25:18.701 time_based=1 00:25:18.701 runtime=10 00:25:18.701 ioengine=libaio 00:25:18.701 direct=1 00:25:18.701 bs=262144 00:25:18.701 iodepth=64 00:25:18.701 norandommap=1 00:25:18.701 numjobs=1 00:25:18.701 00:25:18.701 [job0] 00:25:18.701 filename=/dev/nvme0n1 00:25:18.701 [job1] 00:25:18.701 filename=/dev/nvme10n1 00:25:18.701 [job2] 00:25:18.701 filename=/dev/nvme1n1 00:25:18.701 [job3] 00:25:18.701 filename=/dev/nvme2n1 00:25:18.701 [job4] 00:25:18.701 filename=/dev/nvme3n1 00:25:18.701 [job5] 00:25:18.701 filename=/dev/nvme4n1 00:25:18.701 [job6] 00:25:18.701 filename=/dev/nvme5n1 00:25:18.701 [job7] 00:25:18.701 filename=/dev/nvme6n1 00:25:18.701 [job8] 00:25:18.701 filename=/dev/nvme7n1 00:25:18.701 [job9] 00:25:18.701 filename=/dev/nvme8n1 00:25:18.701 [job10] 00:25:18.701 filename=/dev/nvme9n1 00:25:18.985 Could not set queue depth (nvme0n1) 00:25:18.985 Could not set queue depth (nvme10n1) 00:25:18.985 Could not set queue depth (nvme1n1) 00:25:18.985 Could not set queue depth (nvme2n1) 00:25:18.985 Could not set queue depth (nvme3n1) 00:25:18.985 Could not set queue depth (nvme4n1) 00:25:18.985 Could not set queue depth (nvme5n1) 00:25:18.985 Could not set queue depth (nvme6n1) 00:25:18.985 Could not set queue depth (nvme7n1) 00:25:18.985 Could not set queue depth (nvme8n1) 00:25:18.985 Could not set queue depth (nvme9n1) 00:25:19.250 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.250 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.251 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:19.251 fio-3.35 00:25:19.251 Starting 11 threads 00:25:31.552 00:25:31.552 job0: 
(groupid=0, jobs=1): err= 0: pid=502539: Sat Jun 8 00:50:48 2024 00:25:31.552 read: IOPS=945, BW=236MiB/s (248MB/s)(2380MiB/10071msec) 00:25:31.552 slat (usec): min=5, max=84192, avg=803.21, stdev=2945.47 00:25:31.552 clat (msec): min=2, max=176, avg=66.81, stdev=30.49 00:25:31.552 lat (msec): min=2, max=192, avg=67.61, stdev=30.93 00:25:31.552 clat percentiles (msec): 00:25:31.552 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 36], 00:25:31.552 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 79], 00:25:31.552 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 117], 00:25:31.552 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 161], 00:25:31.552 | 99.99th=[ 176] 00:25:31.552 bw ( KiB/s): min=163001, max=335872, per=10.22%, avg=242123.30, stdev=51821.79, samples=20 00:25:31.552 iops : min= 636, max= 1312, avg=945.70, stdev=202.54, samples=20 00:25:31.552 lat (msec) : 4=0.15%, 10=0.96%, 20=3.47%, 50=29.43%, 100=52.12% 00:25:31.552 lat (msec) : 250=13.89% 00:25:31.552 cpu : usr=0.48%, sys=2.98%, ctx=2640, majf=0, minf=4097 00:25:31.552 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=9521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job1: (groupid=0, jobs=1): err= 0: pid=502540: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=753, BW=188MiB/s (198MB/s)(1900MiB/10086msec) 00:25:31.553 slat (usec): min=6, max=81334, avg=1043.83, stdev=3499.11 00:25:31.553 clat (msec): min=3, max=196, avg=83.78, stdev=34.48 00:25:31.553 lat (msec): min=3, max=196, avg=84.82, stdev=34.95 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 53], 00:25:31.553 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 99], 00:25:31.553 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 128], 00:25:31.553 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 197], 00:25:31.553 | 99.99th=[ 197] 00:25:31.553 bw ( KiB/s): min=132608, max=342355, per=8.15%, avg=193053.00, stdev=58480.24, samples=20 00:25:31.553 iops : min= 518, max= 1337, avg=753.95, stdev=228.34, samples=20 00:25:31.553 lat (msec) : 4=0.03%, 10=1.43%, 20=5.42%, 50=12.09%, 100=43.70% 00:25:31.553 lat (msec) : 250=37.32% 00:25:31.553 cpu : usr=0.34%, sys=2.50%, ctx=1949, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=7601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job2: (groupid=0, jobs=1): err= 0: pid=502541: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=761, BW=190MiB/s (200MB/s)(1920MiB/10083msec) 00:25:31.553 slat (usec): min=6, max=79873, avg=1075.67, stdev=3338.61 00:25:31.553 clat (msec): min=4, max=194, avg=82.87, stdev=31.50 00:25:31.553 lat (msec): min=4, max=194, avg=83.95, stdev=32.04 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 55], 00:25:31.553 | 30.00th=[ 67], 40.00th=[ 77], 50.00th=[ 87], 60.00th=[ 96], 00:25:31.553 | 70.00th=[ 104], 80.00th=[ 112], 
90.00th=[ 120], 95.00th=[ 126], 00:25:31.553 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 184], 99.95th=[ 188], 00:25:31.553 | 99.99th=[ 194] 00:25:31.553 bw ( KiB/s): min=137490, max=274981, per=8.23%, avg=195122.75, stdev=47402.66, samples=20 00:25:31.553 iops : min= 537, max= 1074, avg=762.05, stdev=185.12, samples=20 00:25:31.553 lat (msec) : 10=0.33%, 20=1.97%, 50=15.44%, 100=47.38%, 250=34.89% 00:25:31.553 cpu : usr=0.26%, sys=2.42%, ctx=2014, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=7679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job3: (groupid=0, jobs=1): err= 0: pid=502542: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=920, BW=230MiB/s (241MB/s)(2322MiB/10089msec) 00:25:31.553 slat (usec): min=5, max=84965, avg=797.41, stdev=3647.20 00:25:31.553 clat (msec): min=2, max=190, avg=68.64, stdev=37.87 00:25:31.553 lat (msec): min=2, max=192, avg=69.43, stdev=38.36 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 29], 00:25:31.553 | 30.00th=[ 41], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 83], 00:25:31.553 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 125], 00:25:31.553 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 186], 00:25:31.553 | 99.99th=[ 190] 00:25:31.553 bw ( KiB/s): min=121074, max=406016, per=9.97%, avg=236224.70, stdev=76631.09, samples=20 00:25:31.553 iops : min= 472, max= 1586, avg=922.50, stdev=299.42, samples=20 00:25:31.553 lat (msec) : 4=0.18%, 10=4.51%, 20=8.01%, 50=22.40%, 100=39.03% 00:25:31.553 lat (msec) : 250=25.86% 00:25:31.553 cpu : usr=0.42%, sys=2.84%, ctx=2414, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=9287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job4: (groupid=0, jobs=1): err= 0: pid=502543: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=822, BW=206MiB/s (215MB/s)(2073MiB/10086msec) 00:25:31.553 slat (usec): min=5, max=94088, avg=1016.14, stdev=3360.21 00:25:31.553 clat (msec): min=2, max=200, avg=76.72, stdev=32.95 00:25:31.553 lat (msec): min=2, max=212, avg=77.74, stdev=33.50 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 40], 00:25:31.553 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 92], 00:25:31.553 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 120], 00:25:31.553 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 197], 00:25:31.553 | 99.99th=[ 201] 00:25:31.553 bw ( KiB/s): min=132360, max=396288, per=8.89%, avg=210629.70, stdev=75328.43, samples=20 00:25:31.553 iops : min= 517, max= 1548, avg=822.70, stdev=294.29, samples=20 00:25:31.553 lat (msec) : 4=0.27%, 10=1.06%, 20=2.38%, 50=21.55%, 100=47.18% 00:25:31.553 lat (msec) : 250=27.56% 00:25:31.553 cpu : usr=0.40%, sys=2.56%, ctx=2133, majf=0, minf=3534 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.553 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=8291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job5: (groupid=0, jobs=1): err= 0: pid=502550: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=968, BW=242MiB/s (254MB/s)(2428MiB/10024msec) 00:25:31.553 slat (usec): min=6, max=64337, avg=995.43, stdev=2660.05 00:25:31.553 clat (msec): min=3, max=176, avg=65.01, stdev=24.46 00:25:31.553 lat (msec): min=3, max=176, avg=66.01, stdev=24.82 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 45], 00:25:31.553 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 66], 00:25:31.553 | 70.00th=[ 77], 80.00th=[ 90], 90.00th=[ 103], 95.00th=[ 110], 00:25:31.553 | 99.00th=[ 124], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 157], 00:25:31.553 | 99.99th=[ 178] 00:25:31.553 bw ( KiB/s): min=130560, max=403968, per=10.42%, avg=246988.80, stdev=76848.56, samples=20 00:25:31.553 iops : min= 510, max= 1578, avg=964.80, stdev=300.19, samples=20 00:25:31.553 lat (msec) : 4=0.01%, 10=0.08%, 20=0.58%, 50=31.46%, 100=56.32% 00:25:31.553 lat (msec) : 250=11.55% 00:25:31.553 cpu : usr=0.47%, sys=3.31%, ctx=2138, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=9711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job6: (groupid=0, jobs=1): err= 0: pid=502557: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=1046, BW=262MiB/s (274MB/s)(2623MiB/10025msec) 00:25:31.553 slat (usec): min=5, max=93840, avg=861.68, stdev=2801.41 00:25:31.553 clat (msec): min=2, max=174, avg=60.21, stdev=26.75 00:25:31.553 lat (msec): min=2, max=210, avg=61.07, stdev=27.17 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 41], 00:25:31.553 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 60], 00:25:31.553 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 102], 95.00th=[ 115], 00:25:31.553 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 161], 00:25:31.553 | 99.99th=[ 171] 00:25:31.553 bw ( KiB/s): min=126211, max=355840, per=11.27%, avg=267135.15, stdev=63178.08, samples=20 00:25:31.553 iops : min= 493, max= 1390, avg=1043.35, stdev=246.78, samples=20 00:25:31.553 lat (msec) : 4=0.05%, 10=0.87%, 20=3.02%, 50=34.70%, 100=51.01% 00:25:31.553 lat (msec) : 250=10.36% 00:25:31.553 cpu : usr=0.35%, sys=3.37%, ctx=2478, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=10493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job7: (groupid=0, jobs=1): err= 0: pid=502562: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=785, BW=196MiB/s (206MB/s)(1979MiB/10073msec) 00:25:31.553 slat (usec): min=7, max=73314, avg=1233.76, stdev=3226.16 00:25:31.553 clat (msec): min=11, 
max=166, avg=80.10, stdev=22.51 00:25:31.553 lat (msec): min=11, max=166, avg=81.33, stdev=22.83 00:25:31.553 clat percentiles (msec): 00:25:31.553 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 59], 00:25:31.553 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 87], 00:25:31.553 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 115], 00:25:31.553 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 159], 00:25:31.553 | 99.99th=[ 167] 00:25:31.553 bw ( KiB/s): min=132608, max=382211, per=8.48%, avg=200961.75, stdev=54506.27, samples=20 00:25:31.553 iops : min= 518, max= 1493, avg=784.95, stdev=212.92, samples=20 00:25:31.553 lat (msec) : 20=0.19%, 50=11.37%, 100=70.36%, 250=18.08% 00:25:31.553 cpu : usr=0.41%, sys=2.78%, ctx=1762, majf=0, minf=4097 00:25:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.553 issued rwts: total=7914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.553 job8: (groupid=0, jobs=1): err= 0: pid=502578: Sat Jun 8 00:50:48 2024 00:25:31.553 read: IOPS=796, BW=199MiB/s (209MB/s)(2003MiB/10062msec) 00:25:31.553 slat (usec): min=5, max=79807, avg=1036.98, stdev=3273.06 00:25:31.553 clat (msec): min=4, max=185, avg=79.24, stdev=27.29 00:25:31.553 lat (msec): min=4, max=185, avg=80.28, stdev=27.71 00:25:31.554 clat percentiles (msec): 00:25:31.554 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 54], 00:25:31.554 | 30.00th=[ 67], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 85], 00:25:31.554 | 70.00th=[ 92], 80.00th=[ 103], 90.00th=[ 115], 95.00th=[ 126], 00:25:31.554 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 163], 00:25:31.554 | 99.99th=[ 186] 00:25:31.554 bw ( KiB/s): min=131334, max=286268, per=8.59%, avg=203626.05, stdev=48065.12, samples=20 00:25:31.554 iops : min= 513, max= 1118, avg=795.20, stdev=187.79, samples=20 00:25:31.554 lat (msec) : 10=0.44%, 20=0.99%, 50=14.98%, 100=61.56%, 250=22.04% 00:25:31.554 cpu : usr=0.25%, sys=2.68%, ctx=2033, majf=0, minf=4097 00:25:31.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.554 issued rwts: total=8013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.554 job9: (groupid=0, jobs=1): err= 0: pid=502585: Sat Jun 8 00:50:48 2024 00:25:31.554 read: IOPS=685, BW=171MiB/s (180MB/s)(1722MiB/10045msec) 00:25:31.554 slat (usec): min=8, max=59502, avg=1205.27, stdev=3530.94 00:25:31.554 clat (msec): min=8, max=154, avg=92.02, stdev=22.59 00:25:31.554 lat (msec): min=8, max=166, avg=93.22, stdev=22.85 00:25:31.554 clat percentiles (msec): 00:25:31.554 | 1.00th=[ 27], 5.00th=[ 54], 10.00th=[ 62], 20.00th=[ 74], 00:25:31.554 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 100], 00:25:31.554 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 128], 00:25:31.554 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 153], 00:25:31.554 | 99.99th=[ 155] 00:25:31.554 bw ( KiB/s): min=126464, max=225792, per=7.37%, avg=174664.55, stdev=21881.39, samples=20 00:25:31.554 iops : min= 494, max= 882, avg=682.20, stdev=85.50, 
samples=20 00:25:31.554 lat (msec) : 10=0.07%, 20=0.52%, 50=3.37%, 100=58.70%, 250=37.34% 00:25:31.554 cpu : usr=0.25%, sys=2.27%, ctx=1699, majf=0, minf=4097 00:25:31.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:31.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.554 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.554 job10: (groupid=0, jobs=1): err= 0: pid=502590: Sat Jun 8 00:50:48 2024 00:25:31.554 read: IOPS=792, BW=198MiB/s (208MB/s)(2000MiB/10088msec) 00:25:31.554 slat (usec): min=5, max=58948, avg=1215.74, stdev=3463.52 00:25:31.554 clat (msec): min=11, max=203, avg=79.40, stdev=31.24 00:25:31.554 lat (msec): min=11, max=203, avg=80.62, stdev=31.72 00:25:31.554 clat percentiles (msec): 00:25:31.554 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 47], 00:25:31.554 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 83], 60.00th=[ 93], 00:25:31.554 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 123], 00:25:31.554 | 99.00th=[ 133], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 192], 00:25:31.554 | 99.99th=[ 205] 00:25:31.554 bw ( KiB/s): min=131072, max=365568, per=8.57%, avg=203142.80, stdev=75204.18, samples=20 00:25:31.554 iops : min= 512, max= 1428, avg=793.45, stdev=293.81, samples=20 00:25:31.554 lat (msec) : 20=0.23%, 50=23.39%, 100=43.10%, 250=33.28% 00:25:31.554 cpu : usr=0.32%, sys=2.66%, ctx=1749, majf=0, minf=4097 00:25:31.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:31.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:31.554 issued rwts: total=7998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:31.554 00:25:31.554 Run status group 0 (all jobs): 00:25:31.554 READ: bw=2314MiB/s (2427MB/s), 171MiB/s-262MiB/s (180MB/s-274MB/s), io=22.8GiB (24.5GB), run=10024-10089msec 00:25:31.554 00:25:31.554 Disk stats (read/write): 00:25:31.554 nvme0n1: ios=18928/0, merge=0/0, ticks=1240973/0, in_queue=1240973, util=95.60% 00:25:31.554 nvme10n1: ios=15087/0, merge=0/0, ticks=1237850/0, in_queue=1237850, util=95.99% 00:25:31.554 nvme1n1: ios=15235/0, merge=0/0, ticks=1234503/0, in_queue=1234503, util=96.58% 00:25:31.554 nvme2n1: ios=18446/0, merge=0/0, ticks=1242072/0, in_queue=1242072, util=96.98% 00:25:31.554 nvme3n1: ios=16469/0, merge=0/0, ticks=1234279/0, in_queue=1234279, util=97.22% 00:25:31.554 nvme4n1: ios=19285/0, merge=0/0, ticks=1237652/0, in_queue=1237652, util=97.83% 00:25:31.554 nvme5n1: ios=20889/0, merge=0/0, ticks=1240219/0, in_queue=1240219, util=98.09% 00:25:31.554 nvme6n1: ios=15717/0, merge=0/0, ticks=1233042/0, in_queue=1233042, util=98.30% 00:25:31.554 nvme7n1: ios=15905/0, merge=0/0, ticks=1238270/0, in_queue=1238270, util=98.80% 00:25:31.554 nvme8n1: ios=13660/0, merge=0/0, ticks=1239589/0, in_queue=1239589, util=99.09% 00:25:31.554 nvme9n1: ios=15882/0, merge=0/0, ticks=1231287/0, in_queue=1231287, util=99.27% 00:25:31.554 00:50:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:31.554 [global] 00:25:31.554 thread=1 00:25:31.554 invalidate=1 00:25:31.554 
rw=randwrite 00:25:31.554 time_based=1 00:25:31.554 runtime=10 00:25:31.554 ioengine=libaio 00:25:31.554 direct=1 00:25:31.554 bs=262144 00:25:31.554 iodepth=64 00:25:31.554 norandommap=1 00:25:31.554 numjobs=1 00:25:31.554 00:25:31.554 [job0] 00:25:31.554 filename=/dev/nvme0n1 00:25:31.554 [job1] 00:25:31.554 filename=/dev/nvme10n1 00:25:31.554 [job2] 00:25:31.554 filename=/dev/nvme1n1 00:25:31.554 [job3] 00:25:31.554 filename=/dev/nvme2n1 00:25:31.554 [job4] 00:25:31.554 filename=/dev/nvme3n1 00:25:31.554 [job5] 00:25:31.554 filename=/dev/nvme4n1 00:25:31.554 [job6] 00:25:31.554 filename=/dev/nvme5n1 00:25:31.554 [job7] 00:25:31.554 filename=/dev/nvme6n1 00:25:31.554 [job8] 00:25:31.554 filename=/dev/nvme7n1 00:25:31.554 [job9] 00:25:31.554 filename=/dev/nvme8n1 00:25:31.554 [job10] 00:25:31.554 filename=/dev/nvme9n1 00:25:31.554 Could not set queue depth (nvme0n1) 00:25:31.554 Could not set queue depth (nvme10n1) 00:25:31.554 Could not set queue depth (nvme1n1) 00:25:31.554 Could not set queue depth (nvme2n1) 00:25:31.554 Could not set queue depth (nvme3n1) 00:25:31.554 Could not set queue depth (nvme4n1) 00:25:31.554 Could not set queue depth (nvme5n1) 00:25:31.554 Could not set queue depth (nvme6n1) 00:25:31.554 Could not set queue depth (nvme7n1) 00:25:31.554 Could not set queue depth (nvme8n1) 00:25:31.554 Could not set queue depth (nvme9n1) 00:25:31.554 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:31.554 fio-3.35 00:25:31.554 Starting 11 threads 00:25:41.560 00:25:41.560 job0: (groupid=0, jobs=1): err= 0: pid=504880: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=949, BW=237MiB/s (249MB/s)(2385MiB/10044msec); 0 zone resets 00:25:41.560 slat (usec): min=16, max=467512, avg=1020.38, stdev=6014.99 00:25:41.560 clat (msec): min=2, max=588, avg=66.34, stdev=50.32 00:25:41.560 lat (msec): min=2, max=588, avg=67.36, stdev=50.74 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 41], 00:25:41.560 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:25:41.560 | 70.00th=[ 71], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 112], 00:25:41.560 | 99.00th=[ 232], 99.50th=[ 523], 99.90th=[ 584], 99.95th=[ 
584], 00:25:41.560 | 99.99th=[ 592] 00:25:41.560 bw ( KiB/s): min=65024, max=384512, per=13.43%, avg=242581.95, stdev=89271.80, samples=20 00:25:41.560 iops : min= 254, max= 1502, avg=947.55, stdev=348.73, samples=20 00:25:41.560 lat (msec) : 4=0.24%, 10=0.93%, 20=1.85%, 50=41.33%, 100=37.45% 00:25:41.560 lat (msec) : 250=17.50%, 500=0.04%, 750=0.66% 00:25:41.560 cpu : usr=2.21%, sys=2.93%, ctx=2441, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,9538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job1: (groupid=0, jobs=1): err= 0: pid=504899: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=661, BW=165MiB/s (173MB/s)(1672MiB/10109msec); 0 zone resets 00:25:41.560 slat (usec): min=27, max=35144, avg=1413.81, stdev=2643.69 00:25:41.560 clat (msec): min=5, max=224, avg=95.27, stdev=21.13 00:25:41.560 lat (msec): min=5, max=224, avg=96.68, stdev=21.33 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 23], 5.00th=[ 69], 10.00th=[ 78], 20.00th=[ 82], 00:25:41.560 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 95], 60.00th=[ 99], 00:25:41.560 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 129], 00:25:41.560 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 207], 99.95th=[ 218], 00:25:41.560 | 99.99th=[ 226] 00:25:41.560 bw ( KiB/s): min=126976, max=230400, per=9.39%, avg=169644.50, stdev=26746.12, samples=20 00:25:41.560 iops : min= 496, max= 900, avg=662.65, stdev=104.46, samples=20 00:25:41.560 lat (msec) : 10=0.21%, 20=0.64%, 50=1.93%, 100=63.09%, 250=34.13% 00:25:41.560 cpu : usr=1.66%, sys=2.16%, ctx=2084, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,6689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job2: (groupid=0, jobs=1): err= 0: pid=504918: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=481, BW=120MiB/s (126MB/s)(1218MiB/10128msec); 0 zone resets 00:25:41.560 slat (usec): min=23, max=42925, avg=2011.21, stdev=3756.94 00:25:41.560 clat (msec): min=14, max=270, avg=130.98, stdev=19.79 00:25:41.560 lat (msec): min=14, max=270, avg=132.99, stdev=19.77 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 50], 5.00th=[ 108], 10.00th=[ 114], 20.00th=[ 120], 00:25:41.560 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 136], 00:25:41.560 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 153], 95.00th=[ 157], 00:25:41.560 | 99.00th=[ 176], 99.50th=[ 211], 99.90th=[ 259], 99.95th=[ 259], 00:25:41.560 | 99.99th=[ 271] 00:25:41.560 bw ( KiB/s): min=100864, max=139776, per=6.82%, avg=123110.40, stdev=9835.16, samples=20 00:25:41.560 iops : min= 394, max= 546, avg=480.90, stdev=38.42, samples=20 00:25:41.560 lat (msec) : 20=0.16%, 50=0.88%, 100=1.99%, 250=96.84%, 500=0.12% 00:25:41.560 cpu : usr=1.21%, sys=1.53%, ctx=1374, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,4872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job3: (groupid=0, jobs=1): err= 0: pid=504940: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=630, BW=158MiB/s (165MB/s)(1588MiB/10069msec); 0 zone resets 00:25:41.560 slat (usec): min=24, max=29522, avg=1511.16, stdev=2880.69 00:25:41.560 clat (msec): min=6, max=169, avg=99.94, stdev=29.59 00:25:41.560 lat (msec): min=6, max=169, avg=101.45, stdev=29.96 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 36], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 72], 00:25:41.560 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 100], 60.00th=[ 116], 00:25:41.560 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 140], 95.00th=[ 148], 00:25:41.560 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 169], 00:25:41.560 | 99.99th=[ 169] 00:25:41.560 bw ( KiB/s): min=110592, max=226304, per=8.91%, avg=160947.20, stdev=43939.55, samples=20 00:25:41.560 iops : min= 432, max= 884, avg=628.70, stdev=171.64, samples=20 00:25:41.560 lat (msec) : 10=0.06%, 20=0.27%, 50=1.24%, 100=48.85%, 250=49.57% 00:25:41.560 cpu : usr=1.50%, sys=1.89%, ctx=1823, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,6350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job4: (groupid=0, jobs=1): err= 0: pid=504947: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=706, BW=177MiB/s (185MB/s)(1782MiB/10093msec); 0 zone resets 00:25:41.560 slat (usec): min=25, max=17782, avg=1376.99, stdev=2460.07 00:25:41.560 clat (msec): min=7, max=177, avg=89.24, stdev=18.44 00:25:41.560 lat (msec): min=7, max=183, avg=90.62, stdev=18.60 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 50], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 75], 00:25:41.560 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 92], 00:25:41.560 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 122], 00:25:41.560 | 99.00th=[ 140], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 178], 00:25:41.560 | 99.99th=[ 178] 00:25:41.560 bw ( KiB/s): min=129024, max=231424, per=10.01%, avg=180812.80, stdev=27919.27, samples=20 00:25:41.560 iops : min= 504, max= 904, avg=706.30, stdev=109.06, samples=20 00:25:41.560 lat (msec) : 10=0.04%, 20=0.24%, 50=0.73%, 100=70.33%, 250=28.66% 00:25:41.560 cpu : usr=1.58%, sys=2.41%, ctx=1917, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,7126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job5: (groupid=0, jobs=1): err= 0: pid=504969: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=500, BW=125MiB/s (131MB/s)(1265MiB/10115msec); 0 zone resets 00:25:41.560 slat (usec): min=26, max=113496, avg=1903.18, stdev=4311.27 00:25:41.560 clat (msec): min=60, max=263, avg=125.85, stdev=22.81 00:25:41.560 lat (msec): min=60, max=263, avg=127.75, 
stdev=22.88 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 85], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 111], 00:25:41.560 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 126], 00:25:41.560 | 70.00th=[ 131], 80.00th=[ 138], 90.00th=[ 150], 95.00th=[ 171], 00:25:41.560 | 99.00th=[ 211], 99.50th=[ 236], 99.90th=[ 264], 99.95th=[ 264], 00:25:41.560 | 99.99th=[ 264] 00:25:41.560 bw ( KiB/s): min=87552, max=157184, per=7.08%, avg=127948.80, stdev=17614.35, samples=20 00:25:41.560 iops : min= 342, max= 614, avg=499.80, stdev=68.81, samples=20 00:25:41.560 lat (msec) : 100=4.05%, 250=95.73%, 500=0.22% 00:25:41.560 cpu : usr=1.34%, sys=1.51%, ctx=1445, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.560 issued rwts: total=0,5061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.560 job6: (groupid=0, jobs=1): err= 0: pid=504980: Sat Jun 8 00:50:59 2024 00:25:41.560 write: IOPS=784, BW=196MiB/s (206MB/s)(1977MiB/10081msec); 0 zone resets 00:25:41.560 slat (usec): min=24, max=8028, avg=1244.91, stdev=2188.49 00:25:41.560 clat (msec): min=5, max=165, avg=80.34, stdev=17.35 00:25:41.560 lat (msec): min=5, max=165, avg=81.58, stdev=17.52 00:25:41.560 clat percentiles (msec): 00:25:41.560 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 66], 00:25:41.560 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 88], 00:25:41.560 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 109], 00:25:41.560 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 155], 99.95th=[ 161], 00:25:41.560 | 99.99th=[ 165] 00:25:41.560 bw ( KiB/s): min=150016, max=293376, per=11.12%, avg=200780.80, stdev=40412.77, samples=20 00:25:41.560 iops : min= 586, max= 1146, avg=784.30, stdev=157.86, samples=20 00:25:41.560 lat (msec) : 10=0.05%, 20=0.15%, 50=0.48%, 100=88.64%, 250=10.68% 00:25:41.560 cpu : usr=1.82%, sys=2.60%, ctx=2090, majf=0, minf=1 00:25:41.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:41.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.561 issued rwts: total=0,7906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.561 job7: (groupid=0, jobs=1): err= 0: pid=504989: Sat Jun 8 00:50:59 2024 00:25:41.561 write: IOPS=638, BW=160MiB/s (167MB/s)(1615MiB/10112msec); 0 zone resets 00:25:41.561 slat (usec): min=24, max=29568, avg=1378.62, stdev=2789.92 00:25:41.561 clat (msec): min=3, max=249, avg=98.74, stdev=31.70 00:25:41.561 lat (msec): min=4, max=249, avg=100.12, stdev=32.14 00:25:41.561 clat percentiles (msec): 00:25:41.561 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 59], 20.00th=[ 84], 00:25:41.561 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 105], 00:25:41.561 | 70.00th=[ 110], 80.00th=[ 125], 90.00th=[ 138], 95.00th=[ 148], 00:25:41.561 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 243], 99.95th=[ 243], 00:25:41.561 | 99.99th=[ 249] 00:25:41.561 bw ( KiB/s): min=110592, max=228864, per=9.07%, avg=163763.20, stdev=30412.73, samples=20 00:25:41.561 iops : min= 432, max= 894, avg=639.70, stdev=118.80, samples=20 00:25:41.561 lat (msec) : 4=0.02%, 10=0.43%, 20=1.84%, 
50=6.24%, 100=47.88% 00:25:41.561 lat (msec) : 250=43.59% 00:25:41.561 cpu : usr=1.59%, sys=1.95%, ctx=2451, majf=0, minf=1 00:25:41.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:41.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.561 issued rwts: total=0,6460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.561 job8: (groupid=0, jobs=1): err= 0: pid=505006: Sat Jun 8 00:50:59 2024 00:25:41.561 write: IOPS=504, BW=126MiB/s (132MB/s)(1275MiB/10110msec); 0 zone resets 00:25:41.561 slat (usec): min=24, max=36428, avg=1876.54, stdev=3547.13 00:25:41.561 clat (msec): min=38, max=254, avg=124.93, stdev=18.30 00:25:41.561 lat (msec): min=38, max=255, avg=126.80, stdev=18.25 00:25:41.561 clat percentiles (msec): 00:25:41.561 | 1.00th=[ 92], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 110], 00:25:41.561 | 30.00th=[ 114], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 129], 00:25:41.561 | 70.00th=[ 133], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 153], 00:25:41.561 | 99.00th=[ 163], 99.50th=[ 211], 99.90th=[ 247], 99.95th=[ 247], 00:25:41.561 | 99.99th=[ 255] 00:25:41.561 bw ( KiB/s): min=104448, max=153600, per=7.14%, avg=128947.20, stdev=14015.08, samples=20 00:25:41.561 iops : min= 408, max= 600, avg=503.70, stdev=54.75, samples=20 00:25:41.561 lat (msec) : 50=0.16%, 100=4.41%, 250=95.39%, 500=0.04% 00:25:41.561 cpu : usr=1.38%, sys=1.52%, ctx=1499, majf=0, minf=1 00:25:41.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:41.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.561 issued rwts: total=0,5100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.561 job9: (groupid=0, jobs=1): err= 0: pid=505007: Sat Jun 8 00:50:59 2024 00:25:41.561 write: IOPS=688, BW=172MiB/s (181MB/s)(1739MiB/10096msec); 0 zone resets 00:25:41.561 slat (usec): min=27, max=42286, avg=1261.22, stdev=2682.88 00:25:41.561 clat (msec): min=3, max=197, avg=91.60, stdev=33.46 00:25:41.561 lat (msec): min=4, max=200, avg=92.86, stdev=33.93 00:25:41.561 clat percentiles (msec): 00:25:41.561 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 56], 20.00th=[ 67], 00:25:41.561 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 96], 60.00th=[ 104], 00:25:41.561 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 148], 00:25:41.561 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 197], 00:25:41.561 | 99.99th=[ 199] 00:25:41.561 bw ( KiB/s): min=101888, max=254464, per=9.77%, avg=176460.80, stdev=44915.92, samples=20 00:25:41.561 iops : min= 398, max= 994, avg=689.30, stdev=175.45, samples=20 00:25:41.561 lat (msec) : 4=0.04%, 10=0.72%, 20=1.87%, 50=6.01%, 100=45.59% 00:25:41.561 lat (msec) : 250=45.77% 00:25:41.561 cpu : usr=1.58%, sys=2.16%, ctx=2728, majf=0, minf=1 00:25:41.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:41.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.561 issued rwts: total=0,6956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.561 job10: (groupid=0, jobs=1): err= 
0: pid=505008: Sat Jun 8 00:50:59 2024 00:25:41.561 write: IOPS=534, BW=134MiB/s (140MB/s)(1352MiB/10115msec); 0 zone resets 00:25:41.561 slat (usec): min=24, max=29265, avg=1818.51, stdev=3314.72 00:25:41.561 clat (msec): min=8, max=252, avg=117.86, stdev=24.16 00:25:41.561 lat (msec): min=8, max=252, avg=119.68, stdev=24.28 00:25:41.561 clat percentiles (msec): 00:25:41.561 | 1.00th=[ 25], 5.00th=[ 78], 10.00th=[ 89], 20.00th=[ 104], 00:25:41.561 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 118], 60.00th=[ 123], 00:25:41.561 | 70.00th=[ 131], 80.00th=[ 138], 90.00th=[ 146], 95.00th=[ 150], 00:25:41.561 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 230], 99.95th=[ 230], 00:25:41.561 | 99.99th=[ 253] 00:25:41.561 bw ( KiB/s): min=110592, max=191488, per=7.57%, avg=136806.40, stdev=21530.03, samples=20 00:25:41.561 iops : min= 432, max= 748, avg=534.40, stdev=84.10, samples=20 00:25:41.561 lat (msec) : 10=0.13%, 20=0.63%, 50=0.68%, 100=13.76%, 250=84.76% 00:25:41.561 lat (msec) : 500=0.04% 00:25:41.561 cpu : usr=1.32%, sys=1.64%, ctx=1500, majf=0, minf=1 00:25:41.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:41.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:41.561 issued rwts: total=0,5407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:41.561 00:25:41.561 Run status group 0 (all jobs): 00:25:41.561 WRITE: bw=1764MiB/s (1850MB/s), 120MiB/s-237MiB/s (126MB/s-249MB/s), io=17.4GiB (18.7GB), run=10044-10128msec 00:25:41.561 00:25:41.561 Disk stats (read/write): 00:25:41.561 nvme0n1: ios=52/18523, merge=0/0, ticks=4232/1062413, in_queue=1066645, util=99.60% 00:25:41.561 nvme10n1: ios=49/13353, merge=0/0, ticks=86/1228509, in_queue=1228595, util=97.16% 00:25:41.561 nvme1n1: ios=47/9686, merge=0/0, ticks=1103/1223581, in_queue=1224684, util=99.89% 00:25:41.561 nvme2n1: ios=42/12306, merge=0/0, ticks=79/1200243, in_queue=1200322, util=97.60% 00:25:41.561 nvme3n1: ios=36/13896, merge=0/0, ticks=83/1198345, in_queue=1198428, util=97.65% 00:25:41.561 nvme4n1: ios=46/10094, merge=0/0, ticks=3808/1216133, in_queue=1219941, util=99.97% 00:25:41.561 nvme5n1: ios=0/15452, merge=0/0, ticks=0/1197671, in_queue=1197671, util=97.94% 00:25:41.561 nvme6n1: ios=39/12898, merge=0/0, ticks=594/1232268, in_queue=1232862, util=100.00% 00:25:41.561 nvme7n1: ios=41/10180, merge=0/0, ticks=1044/1226871, in_queue=1227915, util=100.00% 00:25:41.561 nvme8n1: ios=0/13912, merge=0/0, ticks=0/1234600, in_queue=1234600, util=98.94% 00:25:41.561 nvme9n1: ios=0/10788, merge=0/0, ticks=0/1226645, in_queue=1226645, util=99.12% 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:41.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.561 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:41.823 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.823 00:50:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:42.083 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1230 -- # return 0 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.083 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:42.654 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:42.654 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:42.654 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.915 00:51:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:42.915 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.915 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:43.176 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.176 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:43.436 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK8 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.436 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:43.436 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.696 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:43.697 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 
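The teardown cycling through here is multiconnection.sh lines 37-40: for each of the 11 subsystems, disconnect the initiator, wait for the serial to vanish from lsblk, then delete the subsystem over RPC (rpc_cmd is the autotest wrapper around scripts/rpc.py). A condensed sketch of the loop as the trace implies it, with waitforserial_disconnect being the inverse of the connect-side helper above:

    # Approximation of the traced teardown; not the verbatim script.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"   # @38
        waitforserial_disconnect "SPDK${i}"                  # @39: poll until gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # @40
    done

The SPDK10 serial check then resumes below.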
00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.697 00:51:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:43.957 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.957 rmmod nvme_tcp 00:25:43.957 rmmod nvme_fabrics 00:25:43.957 rmmod nvme_keyring 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 
-- # '[' -n 494052 ']' 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 494052 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 494052 ']' 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 494052 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 494052 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 494052' 00:25:43.957 killing process with pid 494052 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 494052 00:25:43.957 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 494052 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.218 00:51:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.762 00:51:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.762 00:25:46.762 real 1m17.489s 00:25:46.762 user 4m52.058s 00:25:46.762 sys 0m23.428s 00:25:46.762 00:51:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:46.762 00:51:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.762 ************************************ 00:25:46.762 END TEST nvmf_multiconnection 00:25:46.762 ************************************ 00:25:46.762 00:51:04 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:46.762 00:51:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:46.762 00:51:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:46.762 00:51:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.762 ************************************ 00:25:46.762 START TEST nvmf_initiator_timeout 00:25:46.762 ************************************ 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:46.762 * Looking for test storage... 
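run_test, whose banners frame each suite here, prints the START marker, times the script, and prints the END marker after the real/user/sys totals seen at the close of each test. A minimal sketch of the wrapper and of launching this suite by hand, assuming a local SPDK checkout as the working directory (the real helper in autotest_common.sh does extra xtrace bookkeeping):

    # Hedged sketch, not the verbatim helper.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvmf_initiator_timeout ./test/nvmf/target/initiator_timeout.sh --transport=tcp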
00:25:46.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.762 00:51:04 
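The common.sh sourcing traced above pins the initiator identity for the whole suite: a host NQN from nvme gen-hostnqn, the embedded UUID reused as the host ID, and both packed into the NVME_HOST argument array used by every nvme connect. The same pattern in isolation, assuming nvme-cli is installed (the UUID is random per invocation, and the extraction below is my own shorthand):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # the uuid part carries no colons
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "${NVME_HOST[@]}"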
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.762 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.763 00:51:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.353 
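gather_supported_nvmf_pci_devs, starting here, classifies NICs by vendor:device ID into the e810, x722, and mlx arrays before picking interfaces for the test. A rough lspci equivalent for the two Intel E810 IDs this job matches, 0x1592 and 0x159b (lspci assumed available; the harness itself reads a prebuilt PCI bus cache instead):

    # List E810 functions by PCI ID; -D prints the full domain:bus:dev.fn
    # address that the 'Found 0000:4b:00.x' lines refer to.
    for dev in 1592 159b; do
        lspci -D -d 8086:$dev
    done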
00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:53.353 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:53.353 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:53.353 Found net devices 
under 0000:4b:00.0: cvl_0_0 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.353 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:53.353 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.354 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.615 00:51:11 
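nvmf_tcp_init, traced here, splits the two E810 ports across a network namespace so initiator and target traffic crosses a real link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The same steps in isolation (interface and namespace names as in this log; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two ping checks that follow confirm the path in both directions before nvmf_tgt is started inside the namespace.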
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:25:53.615 00:25:53.615 --- 10.0.0.2 ping statistics --- 00:25:53.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.615 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:25:53.615 00:25:53.615 --- 10.0.0.1 ping statistics --- 00:25:53.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.615 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.615 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=512181 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 512181 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 512181 ']' 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:53.616 00:51:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.876 [2024-06-08 00:51:11.921574] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:25:53.876 [2024-06-08 00:51:11.921647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.876 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.876 [2024-06-08 00:51:11.993646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.876 [2024-06-08 00:51:12.068465] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.876 [2024-06-08 00:51:12.068499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.876 [2024-06-08 00:51:12.068508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.876 [2024-06-08 00:51:12.068515] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.876 [2024-06-08 00:51:12.068521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.876 [2024-06-08 00:51:12.068660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.876 [2024-06-08 00:51:12.068778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.876 [2024-06-08 00:51:12.068937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.876 [2024-06-08 00:51:12.068938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.449 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:54.449 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:25:54.449 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.449 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:54.449 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 Malloc0 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 Delay0 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 [2024-06-08 00:51:12.781131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 [2024-06-08 00:51:12.821377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.710 00:51:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:56.096 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:56.096 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:25:56.096 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.096 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:25:56.096 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout 
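waitforserial, whose polling is traced around this point, waits for a namespace with the expected serial to appear in lsblk after nvme connect (waitforserial_disconnect inverts the test during teardown). A simplified sketch, assuming the 15-retry budget seen in the autotest_common.sh trace (the real helper also compares against an expected device count):

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME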
-- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=512890 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:58.640 00:51:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:58.640 [global] 00:25:58.640 thread=1 00:25:58.640 invalidate=1 00:25:58.640 rw=write 00:25:58.640 time_based=1 00:25:58.640 runtime=60 00:25:58.640 ioengine=libaio 00:25:58.640 direct=1 00:25:58.640 bs=4096 00:25:58.640 iodepth=1 00:25:58.640 norandommap=0 00:25:58.640 numjobs=1 00:25:58.640 00:25:58.640 verify_dump=1 00:25:58.640 verify_backlog=512 00:25:58.640 verify_state_save=0 00:25:58.640 do_verify=1 00:25:58.640 verify=crc32c-intel 00:25:58.640 [job0] 00:25:58.640 filename=/dev/nvme0n1 00:25:58.640 Could not set queue depth (nvme0n1) 00:25:58.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:58.640 fio-3.35 00:25:58.640 Starting 1 thread 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.212 true 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.212 true 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.212 true 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.212 true 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.212 00:51:19 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.514 true 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.514 true 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.514 true 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.514 true 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:04.514 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 512890 00:27:00.783 00:27:00.783 job0: (groupid=0, jobs=1): err= 0: pid=513232: Sat Jun 8 00:52:16 2024 00:27:00.783 read: IOPS=47, BW=190KiB/s (194kB/s)(11.1MiB/60023msec) 00:27:00.783 slat (usec): min=6, max=9065, avg=29.73, stdev=229.90 00:27:00.783 clat (usec): min=287, max=41866k, avg=20339.94, stdev=784786.09 00:27:00.783 lat (usec): min=294, max=41866k, avg=20369.67, stdev=784786.01 00:27:00.783 clat percentiles (usec): 00:27:00.783 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 701], 00:27:00.783 | 20.00th=[ 750], 30.00th=[ 791], 40.00th=[ 816], 00:27:00.783 | 50.00th=[ 832], 60.00th=[ 857], 70.00th=[ 881], 00:27:00.783 | 80.00th=[ 938], 90.00th=[ 41681], 95.00th=[ 42206], 00:27:00.783 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:27:00.783 | 99.95th=[ 45876], 99.99th=[17112761] 00:27:00.783 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60023msec); 0 zone resets 00:27:00.783 slat (usec): min=9, max=32443, avg=38.46, stdev=584.93 00:27:00.783 clat (usec): min=211, max=1078, avg=614.33, stdev=121.40 00:27:00.783 lat (usec): min=222, max=33141, avg=652.79, stdev=599.67 00:27:00.783 clat percentiles (usec): 00:27:00.783 | 1.00th=[ 334], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 515], 00:27:00.783 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 644], 00:27:00.783 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 824], 00:27:00.783 | 99.00th=[ 873], 
99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 996], 00:27:00.783 | 99.99th=[ 1074] 00:27:00.783 bw ( KiB/s): min= 1096, max= 4096, per=100.00%, avg=3072.00, stdev=1236.89, samples=8 00:27:00.783 iops : min= 274, max= 1024, avg=768.00, stdev=309.22, samples=8 00:27:00.783 lat (usec) : 250=0.08%, 500=9.28%, 750=45.57%, 1000=37.07% 00:27:00.783 lat (msec) : 2=2.35%, 4=0.02%, 50=5.61%, >=2000=0.02% 00:27:00.783 cpu : usr=0.16%, sys=0.25%, ctx=5927, majf=0, minf=1 00:27:00.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.783 issued rwts: total=2846,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:00.783 00:27:00.783 Run status group 0 (all jobs): 00:27:00.783 READ: bw=190KiB/s (194kB/s), 190KiB/s-190KiB/s (194kB/s-194kB/s), io=11.1MiB (11.7MB), run=60023-60023msec 00:27:00.783 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60023-60023msec 00:27:00.783 00:27:00.783 Disk stats (read/write): 00:27:00.783 nvme0n1: ios=2895/3072, merge=0/0, ticks=17158/1815, in_queue=18973, util=100.00% 00:27:00.783 00:52:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:00.783 nvmf hotplug test: fio successful as expected 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:27:00.783 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.784 rmmod nvme_tcp 00:27:00.784 rmmod nvme_fabrics 00:27:00.784 rmmod nvme_keyring 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 512181 ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 512181 ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 512181' 00:27:00.784 killing process with pid 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 512181 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.784 00:52:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.355 00:52:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.355 00:27:01.355 real 1m14.774s 00:27:01.355 user 4m33.867s 00:27:01.355 sys 0m7.335s 00:27:01.355 00:52:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:01.355 00:52:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.355 
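Condensed, the target-side configuration this suite drove over /var/tmp/spdk.sock reduces to a handful of rpc.py calls (scripts/rpc.py ships with SPDK; the nvmf_tgt app was launched under 'ip netns exec cvl_0_0_ns_spdk' earlier in the trace, and its unix socket is reachable from the root namespace). A sketch of the sequence:

    RPC="./scripts/rpc.py"
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    # Delay bdev injecting 30 us average/p99 read and write latency.
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Mid-run, the bdev_delay_update_latency calls raised those latencies to 31,000,000 us and beyond to starve the initiator past its timeout, then restored them to 30 us so fio could finish, which is why the run ends with 'nvmf hotplug test: fio successful as expected'.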
************************************ 00:27:01.355 END TEST nvmf_initiator_timeout 00:27:01.355 ************************************ 00:27:01.355 00:52:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:01.355 00:52:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:01.355 00:52:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:01.355 00:52:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.355 00:52:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:07.945 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.945 00:52:26 
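nvmf.sh only reaches the ADQ suite on physical hardware: NET_TYPE must be phy, the transport tcp, and the PCI scan must yield at least one usable interface. A paraphrase of the nvmf.sh@71-76 guard traced here (variable names are my reading of the expanded comparisons, not verbatim from the script):

    if [[ $NET_TYPE == phy ]] && [ "$TEST_TRANSPORT" = tcp ]; then
        gather_supported_nvmf_pci_devs        # fills net_devs[] from the PCI scan
        TCP_INTERFACE_LIST=("${net_devs[@]}")
        if (( ${#TCP_INTERFACE_LIST[@]} > 0 )); then
            run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
        fi
    fi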
nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:07.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:07.945 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:07.945 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:07.945 00:52:26 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.945 00:52:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:07.945 00:52:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:07.945 00:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:27:07.945 ************************************ 00:27:07.945 START TEST nvmf_perf_adq 00:27:07.945 ************************************ 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.945 * Looking for test storage... 00:27:07.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.945 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.946 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.946 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.946 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.946 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.208 00:52:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:14.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:14.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 
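The gather_supported_nvmf_pci_devs trace above reduces to a small classification step: NICs are bucketed by PCI vendor:device ID and, because this job runs with SPDK_TEST_NVMF_NICS=e810, only the E810 bucket is kept. A minimal bash sketch of that logic, assuming (as nvmf/common.sh does) an associative array pci_bus_cache pre-filled with "vendor:device" -> PCI-address entries; the IDs are the ones echoed in the trace:

    # Hedged sketch: classify NICs by PCI vendor:device ID.
    intel=0x8086 mellanox=0x15b3
    declare -A pci_bus_cache     # assumed pre-filled, e.g. pci_bus_cache["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # E810 ports (ice driver)
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")      # only the e810 bucket survives for this job
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"        # the trace also prints the vendor/device pair
    done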
00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:14.798 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:14.798 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:14.798 00:52:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:16.710 00:52:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:18.680 00:52:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:23.971 00:52:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:23.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:23.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:23.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:23.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.971 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.971 00:52:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:27:23.972 00:27:23.972 --- 10.0.0.2 ping statistics --- 00:27:23.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.972 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:27:23.972 00:27:23.972 --- 10.0.0.1 ping statistics --- 00:27:23.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.972 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=534043 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 534043 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 534043 ']' 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:23.972 00:52:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.972 [2024-06-08 00:52:41.947504] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
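The nvmf_tcp_init block traced above is what lets the target and the initiator share one host over real E810 hardware: one port (cvl_0_0) moves into a private network namespace for the target, its twin (cvl_0_1) stays in the root namespace for the initiator, and both directions are verified with ping before nvmf_tgt starts. Condensed into a hedged, run-as-root sketch, with interfaces and addresses exactly as logged:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns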
00:27:23.972 [2024-06-08 00:52:41.947565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.972 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.972 [2024-06-08 00:52:42.018442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.972 [2024-06-08 00:52:42.094410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.972 [2024-06-08 00:52:42.094445] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.972 [2024-06-08 00:52:42.094453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.972 [2024-06-08 00:52:42.094459] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.972 [2024-06-08 00:52:42.094465] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.972 [2024-06-08 00:52:42.094655] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.972 [2024-06-08 00:52:42.094810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.972 [2024-06-08 00:52:42.094970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.972 [2024-06-08 00:52:42.094972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.545 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 [2024-06-08 00:52:42.900317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 Malloc1 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 [2024-06-08 00:52:42.957184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=534393 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:24.806 00:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:24.806 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:26.753 "tick_rate": 2400000000, 00:27:26.753 
"poll_groups": [ 00:27:26.753 { 00:27:26.753 "name": "nvmf_tgt_poll_group_000", 00:27:26.753 "admin_qpairs": 1, 00:27:26.753 "io_qpairs": 1, 00:27:26.753 "current_admin_qpairs": 1, 00:27:26.753 "current_io_qpairs": 1, 00:27:26.753 "pending_bdev_io": 0, 00:27:26.753 "completed_nvme_io": 19897, 00:27:26.753 "transports": [ 00:27:26.753 { 00:27:26.753 "trtype": "TCP" 00:27:26.753 } 00:27:26.753 ] 00:27:26.753 }, 00:27:26.753 { 00:27:26.753 "name": "nvmf_tgt_poll_group_001", 00:27:26.753 "admin_qpairs": 0, 00:27:26.753 "io_qpairs": 1, 00:27:26.753 "current_admin_qpairs": 0, 00:27:26.753 "current_io_qpairs": 1, 00:27:26.753 "pending_bdev_io": 0, 00:27:26.753 "completed_nvme_io": 28734, 00:27:26.753 "transports": [ 00:27:26.753 { 00:27:26.753 "trtype": "TCP" 00:27:26.753 } 00:27:26.753 ] 00:27:26.753 }, 00:27:26.753 { 00:27:26.753 "name": "nvmf_tgt_poll_group_002", 00:27:26.753 "admin_qpairs": 0, 00:27:26.753 "io_qpairs": 1, 00:27:26.753 "current_admin_qpairs": 0, 00:27:26.753 "current_io_qpairs": 1, 00:27:26.753 "pending_bdev_io": 0, 00:27:26.753 "completed_nvme_io": 22567, 00:27:26.753 "transports": [ 00:27:26.753 { 00:27:26.753 "trtype": "TCP" 00:27:26.753 } 00:27:26.753 ] 00:27:26.753 }, 00:27:26.753 { 00:27:26.753 "name": "nvmf_tgt_poll_group_003", 00:27:26.753 "admin_qpairs": 0, 00:27:26.753 "io_qpairs": 1, 00:27:26.753 "current_admin_qpairs": 0, 00:27:26.753 "current_io_qpairs": 1, 00:27:26.753 "pending_bdev_io": 0, 00:27:26.753 "completed_nvme_io": 20399, 00:27:26.753 "transports": [ 00:27:26.753 { 00:27:26.753 "trtype": "TCP" 00:27:26.753 } 00:27:26.753 ] 00:27:26.753 } 00:27:26.753 ] 00:27:26.753 }' 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:26.753 00:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:27.015 00:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:27.015 00:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:27.015 00:52:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 534393 00:27:35.153 Initializing NVMe Controllers 00:27:35.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:35.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:35.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:35.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:35.154 Initialization complete. Launching workers. 
00:27:35.154 ======================================================== 00:27:35.154 Latency(us) 00:27:35.154 Device Information : IOPS MiB/s Average min max 00:27:35.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11465.67 44.79 5582.51 1197.60 9429.30 00:27:35.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15034.36 58.73 4256.40 1426.45 8409.76 00:27:35.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14362.46 56.10 4455.95 1336.27 10308.92 00:27:35.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13693.16 53.49 4673.41 1279.50 11352.29 00:27:35.154 ======================================================== 00:27:35.154 Total : 54555.65 213.11 4692.30 1197.60 11352.29 00:27:35.154 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.154 rmmod nvme_tcp 00:27:35.154 rmmod nvme_fabrics 00:27:35.154 rmmod nvme_keyring 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 534043 ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 534043 ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 534043' 00:27:35.154 killing process with pid 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 534043 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.154 00:52:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.700 00:52:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.700 00:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:37.700 00:52:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:39.085 00:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:40.999 00:52:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.288 00:53:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:46.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:46.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.288 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:46.289 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:46.289 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.289 00:53:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.289 
00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:27:46.289 00:27:46.289 --- 10.0.0.2 ping statistics --- 00:27:46.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.289 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:27:46.289 00:27:46.289 --- 10.0.0.1 ping statistics --- 00:27:46.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.289 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:46.289 net.core.busy_poll = 1 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:46.289 net.core.busy_read = 1 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=538850 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 538850 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 538850 ']' 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:46.289 00:53:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.550 [2024-06-08 00:53:04.613194] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:46.550 [2024-06-08 00:53:04.613243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.550 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.550 [2024-06-08 00:53:04.677485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.550 [2024-06-08 00:53:04.742870] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.550 [2024-06-08 00:53:04.742905] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.550 [2024-06-08 00:53:04.742913] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.550 [2024-06-08 00:53:04.742920] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.550 [2024-06-08 00:53:04.742925] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
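adq_configure_driver, traced just above, is the ADQ-specific half of this run: hardware TC offload is turned on, the kernel is switched to busy polling, an offloaded mqprio qdisc splits the device queues into two traffic classes, and a hardware flower filter pins NVMe/TCP traffic (10.0.0.2:4420) to TC 1. Restated as a hedged, run-as-root sketch using the exact parameters from the trace:

    NS='ip netns exec cvl_0_0_ns_spdk'                   # the target port lives in the namespace
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                       # spin in poll/select instead of sleeping
    sysctl -w net.core.busy_read=1                       # and in socket reads
    # Two traffic classes, two queues each (2@0, 2@2), offloaded to the NIC (hw 1 mode channel).
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP flows to TC 1 entirely in hardware (skip_sw).
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then started inside the same namespace and, as the trace shows, its TCP transport is created with --sock-priority 1, which is what ties the NVMe/TCP sockets to the mqprio traffic class configured here.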
00:27:46.550 [2024-06-08 00:53:04.743068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.550 [2024-06-08 00:53:04.743182] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.550 [2024-06-08 00:53:04.743328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.550 [2024-06-08 00:53:04.743330] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.122 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:47.122 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:27:47.122 00:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.122 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:47.122 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 [2024-06-08 00:53:05.563662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.383 Malloc1 00:27:47.383 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.384 00:53:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.384 [2024-06-08 00:53:05.623090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=539125 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:47.384 00:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:47.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:49.955 "tick_rate": 2400000000, 00:27:49.955 "poll_groups": [ 00:27:49.955 { 00:27:49.955 "name": "nvmf_tgt_poll_group_000", 00:27:49.955 "admin_qpairs": 1, 00:27:49.955 "io_qpairs": 2, 00:27:49.955 "current_admin_qpairs": 1, 00:27:49.955 "current_io_qpairs": 2, 00:27:49.955 "pending_bdev_io": 0, 00:27:49.955 "completed_nvme_io": 29916, 00:27:49.955 "transports": [ 00:27:49.955 { 00:27:49.955 "trtype": "TCP" 00:27:49.955 } 00:27:49.955 ] 00:27:49.955 }, 00:27:49.955 { 00:27:49.955 "name": "nvmf_tgt_poll_group_001", 00:27:49.955 "admin_qpairs": 0, 00:27:49.955 "io_qpairs": 2, 00:27:49.955 "current_admin_qpairs": 0, 00:27:49.955 "current_io_qpairs": 2, 00:27:49.955 "pending_bdev_io": 0, 00:27:49.955 "completed_nvme_io": 39153, 00:27:49.955 "transports": [ 00:27:49.955 { 00:27:49.955 "trtype": "TCP" 00:27:49.955 } 00:27:49.955 ] 00:27:49.955 }, 00:27:49.955 { 00:27:49.955 "name": "nvmf_tgt_poll_group_002", 00:27:49.955 "admin_qpairs": 0, 00:27:49.955 "io_qpairs": 0, 00:27:49.955 "current_admin_qpairs": 0, 00:27:49.955 "current_io_qpairs": 0, 00:27:49.955 "pending_bdev_io": 0, 00:27:49.955 "completed_nvme_io": 0, 
00:27:49.955 "transports": [ 00:27:49.955 { 00:27:49.955 "trtype": "TCP" 00:27:49.955 } 00:27:49.955 ] 00:27:49.955 }, 00:27:49.955 { 00:27:49.955 "name": "nvmf_tgt_poll_group_003", 00:27:49.955 "admin_qpairs": 0, 00:27:49.955 "io_qpairs": 0, 00:27:49.955 "current_admin_qpairs": 0, 00:27:49.955 "current_io_qpairs": 0, 00:27:49.955 "pending_bdev_io": 0, 00:27:49.955 "completed_nvme_io": 0, 00:27:49.955 "transports": [ 00:27:49.955 { 00:27:49.955 "trtype": "TCP" 00:27:49.955 } 00:27:49.955 ] 00:27:49.955 } 00:27:49.955 ] 00:27:49.955 }' 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:49.955 00:53:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 539125 00:27:58.092 Initializing NVMe Controllers 00:27:58.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:58.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:58.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:58.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:58.092 Initialization complete. Launching workers. 00:27:58.092 ======================================================== 00:27:58.092 Latency(us) 00:27:58.092 Device Information : IOPS MiB/s Average min max 00:27:58.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11678.50 45.62 5496.92 1081.89 50542.74 00:27:58.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8013.10 31.30 7986.28 1337.32 50467.52 00:27:58.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7630.90 29.81 8412.31 1224.14 52979.67 00:27:58.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11771.30 45.98 5449.71 1105.38 49722.69 00:27:58.092 ======================================================== 00:27:58.092 Total : 39093.79 152.71 6562.02 1081.89 52979.67 00:27:58.092 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.092 rmmod nvme_tcp 00:27:58.092 rmmod nvme_fabrics 00:27:58.092 rmmod nvme_keyring 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 538850 ']' 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 538850 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 538850 ']' 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 538850 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 538850 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 538850' 00:27:58.092 killing process with pid 538850 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 538850 00:27:58.092 00:53:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 538850 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.092 00:53:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.393 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.393 00:53:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:01.393 00:28:01.393 real 0m53.012s 00:28:01.393 user 2m46.456s 00:28:01.393 sys 0m11.754s 00:28:01.393 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:01.393 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.393 ************************************ 00:28:01.393 END TEST nvmf_perf_adq 00:28:01.393 ************************************ 00:28:01.393 00:53:19 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:01.393 00:53:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:01.393 00:53:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:01.393 00:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.393 ************************************ 00:28:01.393 START TEST nvmf_shutdown 00:28:01.393 ************************************ 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:01.393 * Looking for test storage... 
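For reference, the ADQ bring-up and steering check traced above reduce to the following RPC sequence (a minimal sketch using scripts/rpc.py in place of the harness's rpc_cmd wrapper, and assuming nvmf_tgt was started with --wait-for-rpc so the posix socket options can still be changed before framework init):

    # Enable placement-id steering and zero-copy sends on the posix sock
    # layer, then finish framework init (sock options are fixed afterwards).
    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)
    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 \
        --enable-zerocopy-send-server -i "$impl"
    scripts/rpc.py framework_start_init

    # TCP transport at socket priority 1, one malloc-backed subsystem.
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # While spdk_nvme_perf runs on cores 4-7 (-c 0xF0), steering is judged
    # good when at least two of the four poll groups carry no I/O qpairs:
    idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    (( idle < 2 )) && echo "ADQ steering check failed: only $idle idle poll groups"

The nvmf_get_stats output above shows exactly that split: poll groups 000 and 001 hold all four I/O qpairs while 002 and 003 stay at zero, so count=2 and the [[ 2 -lt 2 ]] guard does not fire.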
00:28:01.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:01.393 ************************************ 00:28:01.393 START TEST nvmf_shutdown_tc1 00:28:01.393 ************************************ 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:28:01.393 00:53:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.393 00:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.983 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.984 00:53:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.984 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:28:08.245 00:28:08.245 --- 10.0.0.2 ping statistics --- 00:28:08.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.245 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:28:08.245 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:28:08.246 00:28:08.246 --- 10.0.0.1 ping statistics --- 00:28:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.246 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=545344 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 545344 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 545344 ']' 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.246 00:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:08.507 [2024-06-08 00:53:26.532736] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
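The network fixture just assembled deserves a note: the two E810 ports matched earlier (cvl_0_0 and cvl_0_1, both 0x8086:0x159b bound to ice) are cabled back-to-back, and nvmf_tcp_init isolates the target-side port in a network namespace so initiator and target can coexist on one host. Condensed from the trace:

    # Target port moves into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

From here on, every target-side process, nvmf_tgt included, is launched through ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD right after both pings succeed.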
00:28:08.507 [2024-06-08 00:53:26.532824] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.507 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.507 [2024-06-08 00:53:26.624647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.507 [2024-06-08 00:53:26.720473] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.507 [2024-06-08 00:53:26.720529] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.507 [2024-06-08 00:53:26.720537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.507 [2024-06-08 00:53:26.720544] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.507 [2024-06-08 00:53:26.720550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.507 [2024-06-08 00:53:26.720680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.507 [2024-06-08 00:53:26.720848] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.507 [2024-06-08 00:53:26.721006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.507 [2024-06-08 00:53:26.721007] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 [2024-06-08 00:53:27.334741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.080 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.341 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.341 Malloc1 00:28:09.341 [2024-06-08 00:53:27.435501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.341 Malloc2 00:28:09.341 Malloc3 00:28:09.341 Malloc4 00:28:09.341 Malloc5 00:28:09.341 Malloc6 00:28:09.602 Malloc7 00:28:09.602 Malloc8 00:28:09.602 Malloc9 00:28:09.602 Malloc10 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=545720 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 545720 /var/tmp/bdevperf.sock 00:28:09.602 00:53:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 545720 ']' 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:09.602 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": 
"$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.603 [2024-06-08 00:53:27.877826] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:09.603 [2024-06-08 00:53:27.877879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.603 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.603 { 00:28:09.603 "params": { 00:28:09.603 "name": "Nvme$subsystem", 00:28:09.603 "trtype": "$TEST_TRANSPORT", 00:28:09.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.603 "adrfam": "ipv4", 00:28:09.603 "trsvcid": "$NVMF_PORT", 00:28:09.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.603 "hdgst": ${hdgst:-false}, 00:28:09.603 "ddgst": ${ddgst:-false} 00:28:09.603 }, 00:28:09.603 "method": "bdev_nvme_attach_controller" 00:28:09.603 } 00:28:09.603 EOF 00:28:09.603 )") 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.864 { 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme$subsystem", 00:28:09.864 "trtype": "$TEST_TRANSPORT", 00:28:09.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "$NVMF_PORT", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.864 "hdgst": ${hdgst:-false}, 00:28:09.864 "ddgst": ${ddgst:-false} 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 } 00:28:09.864 EOF 00:28:09.864 )") 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.864 { 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme$subsystem", 00:28:09.864 "trtype": "$TEST_TRANSPORT", 00:28:09.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "$NVMF_PORT", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.864 "hdgst": ${hdgst:-false}, 00:28:09.864 "ddgst": ${ddgst:-false} 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 } 00:28:09.864 EOF 00:28:09.864 )") 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.864 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:09.864 { 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme$subsystem", 00:28:09.864 "trtype": "$TEST_TRANSPORT", 00:28:09.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "$NVMF_PORT", 00:28:09.864 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.864 "hdgst": ${hdgst:-false}, 00:28:09.864 "ddgst": ${ddgst:-false} 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 } 00:28:09.864 EOF 00:28:09.864 )") 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:09.864 00:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme1", 00:28:09.864 "trtype": "tcp", 00:28:09.864 "traddr": "10.0.0.2", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "4420", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:09.864 "hdgst": false, 00:28:09.864 "ddgst": false 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 },{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme2", 00:28:09.864 "trtype": "tcp", 00:28:09.864 "traddr": "10.0.0.2", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "4420", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:09.864 "hdgst": false, 00:28:09.864 "ddgst": false 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 },{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme3", 00:28:09.864 "trtype": "tcp", 00:28:09.864 "traddr": "10.0.0.2", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "4420", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:09.864 "hdgst": false, 00:28:09.864 "ddgst": false 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 },{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme4", 00:28:09.864 "trtype": "tcp", 00:28:09.864 "traddr": "10.0.0.2", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "4420", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:09.864 "hdgst": false, 00:28:09.864 "ddgst": false 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 },{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme5", 00:28:09.864 "trtype": "tcp", 00:28:09.864 "traddr": "10.0.0.2", 00:28:09.864 "adrfam": "ipv4", 00:28:09.864 "trsvcid": "4420", 00:28:09.864 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:09.864 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:09.864 "hdgst": false, 00:28:09.864 "ddgst": false 00:28:09.864 }, 00:28:09.864 "method": "bdev_nvme_attach_controller" 00:28:09.864 },{ 00:28:09.864 "params": { 00:28:09.864 "name": "Nvme6", 00:28:09.864 "trtype": "tcp", 00:28:09.865 "traddr": "10.0.0.2", 00:28:09.865 "adrfam": "ipv4", 00:28:09.865 "trsvcid": "4420", 00:28:09.865 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:09.865 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:09.865 "hdgst": false, 00:28:09.865 "ddgst": false 00:28:09.865 }, 00:28:09.865 "method": "bdev_nvme_attach_controller" 00:28:09.865 },{ 00:28:09.865 "params": { 00:28:09.865 "name": "Nvme7", 00:28:09.865 "trtype": "tcp", 00:28:09.865 "traddr": "10.0.0.2", 00:28:09.865 "adrfam": "ipv4", 00:28:09.865 "trsvcid": "4420", 00:28:09.865 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:09.865 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:28:09.865 "hdgst": false, 00:28:09.865 "ddgst": false 00:28:09.865 }, 00:28:09.865 "method": "bdev_nvme_attach_controller" 00:28:09.865 },{ 00:28:09.865 "params": { 00:28:09.865 "name": "Nvme8", 00:28:09.865 "trtype": "tcp", 00:28:09.865 "traddr": "10.0.0.2", 00:28:09.865 "adrfam": "ipv4", 00:28:09.865 "trsvcid": "4420", 00:28:09.865 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:09.865 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:09.865 "hdgst": false, 00:28:09.865 "ddgst": false 00:28:09.865 }, 00:28:09.865 "method": "bdev_nvme_attach_controller" 00:28:09.865 },{ 00:28:09.865 "params": { 00:28:09.865 "name": "Nvme9", 00:28:09.865 "trtype": "tcp", 00:28:09.865 "traddr": "10.0.0.2", 00:28:09.865 "adrfam": "ipv4", 00:28:09.865 "trsvcid": "4420", 00:28:09.865 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:09.865 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:09.865 "hdgst": false, 00:28:09.865 "ddgst": false 00:28:09.865 }, 00:28:09.865 "method": "bdev_nvme_attach_controller" 00:28:09.865 },{ 00:28:09.865 "params": { 00:28:09.865 "name": "Nvme10", 00:28:09.865 "trtype": "tcp", 00:28:09.865 "traddr": "10.0.0.2", 00:28:09.865 "adrfam": "ipv4", 00:28:09.865 "trsvcid": "4420", 00:28:09.865 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:09.865 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:09.865 "hdgst": false, 00:28:09.865 "ddgst": false 00:28:09.865 }, 00:28:09.865 "method": "bdev_nvme_attach_controller" 00:28:09.865 }' 00:28:09.865 [2024-06-08 00:53:27.937991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.865 [2024-06-08 00:53:28.002699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 545720 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:11.250 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:12.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 545720 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:12.193 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 545344 00:28:12.193 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:12.193 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.193 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:12.193 00:53:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.193 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 [2024-06-08 00:53:30.443435] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:12.194 [2024-06-08 00:53:30.443489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546233 ] 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.194 { 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme$subsystem", 00:28:12.194 "trtype": "$TEST_TRANSPORT", 00:28:12.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "$NVMF_PORT", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.194 "hdgst": ${hdgst:-false}, 00:28:12.194 "ddgst": ${ddgst:-false} 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 } 00:28:12.194 EOF 00:28:12.194 )") 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:12.194 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
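
The xtrace block above is gen_nvmf_target_json (nvmf/common.sh) assembling the bdevperf config: one bdev_nvme_attach_controller fragment per subsystem is emitted through a heredoc into the config array, the fragments are comma-joined via IFS, and the result is validated by piping it through jq. A minimal sketch of that pattern, reconstructed from the trace; the real helper wraps the fragments in a fuller top-level document and may emit additional config entries, gen_attach_json is a hypothetical reduced name, and TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst and ddgst come from the test environment as seen in the trace:

gen_attach_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments into a bdev config array; jq rejects malformed JSON.
    local IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

The joined result is exactly the '{ ... },{ ... }' blob printf'd in the trace lines that follow.
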
00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:12.194 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.194 "params": { 00:28:12.194 "name": "Nvme1", 00:28:12.194 "trtype": "tcp", 00:28:12.194 "traddr": "10.0.0.2", 00:28:12.194 "adrfam": "ipv4", 00:28:12.194 "trsvcid": "4420", 00:28:12.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.194 "hdgst": false, 00:28:12.194 "ddgst": false 00:28:12.194 }, 00:28:12.194 "method": "bdev_nvme_attach_controller" 00:28:12.194 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme2", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme3", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme4", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme5", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme6", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme7", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme8", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.195 "hdgst": false, 
00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme9", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 },{ 00:28:12.195 "params": { 00:28:12.195 "name": "Nvme10", 00:28:12.195 "trtype": "tcp", 00:28:12.195 "traddr": "10.0.0.2", 00:28:12.195 "adrfam": "ipv4", 00:28:12.195 "trsvcid": "4420", 00:28:12.195 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.195 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.195 "hdgst": false, 00:28:12.195 "ddgst": false 00:28:12.195 }, 00:28:12.195 "method": "bdev_nvme_attach_controller" 00:28:12.195 }' 00:28:12.456 [2024-06-08 00:53:30.504531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.456 [2024-06-08 00:53:30.568870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.840 Running I/O for 1 seconds... 00:28:15.225 00:28:15.225 Latency(us) 00:28:15.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme1n1 : 1.02 250.00 15.63 0.00 0.00 253154.77 17476.27 251658.24 00:28:15.225 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme2n1 : 1.09 235.35 14.71 0.00 0.00 263860.48 22391.47 244667.73 00:28:15.225 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme3n1 : 1.12 229.42 14.34 0.00 0.00 266299.52 20316.16 248162.99 00:28:15.225 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme4n1 : 1.16 220.88 13.80 0.00 0.00 271255.04 22937.60 274377.39 00:28:15.225 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme5n1 : 1.15 279.38 17.46 0.00 0.00 210268.50 9338.88 255153.49 00:28:15.225 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme6n1 : 1.17 218.42 13.65 0.00 0.00 265831.25 44127.57 251658.24 00:28:15.225 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme7n1 : 1.17 273.80 17.11 0.00 0.00 208078.34 19005.44 248162.99 00:28:15.225 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme8n1 : 1.17 217.96 13.62 0.00 0.00 256760.53 23702.19 276125.01 00:28:15.225 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 00:28:15.225 Nvme9n1 : 1.18 227.96 14.25 0.00 0.00 238592.41 8082.77 267386.88 00:28:15.225 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.225 Verification LBA range: start 0x0 length 0x400 
00:28:15.225 Nvme10n1 : 1.19 269.76 16.86 0.00 0.00 200118.95 19551.57 246415.36 00:28:15.225 =================================================================================================================== 00:28:15.225 Total : 2422.92 151.43 0.00 0.00 240812.30 8082.77 276125.01 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.225 rmmod nvme_tcp 00:28:15.225 rmmod nvme_fabrics 00:28:15.225 rmmod nvme_keyring 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 545344 ']' 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 545344 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 545344 ']' 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 545344 00:28:15.225 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 545344 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 545344' 00:28:15.485 killing process with pid 545344 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 545344 00:28:15.485 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 545344 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.747 00:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.661 00:28:17.661 real 0m16.493s 00:28:17.661 user 0m34.298s 00:28:17.661 sys 0m6.533s 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.661 ************************************ 00:28:17.661 END TEST nvmf_shutdown_tc1 00:28:17.661 ************************************ 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:17.661 00:53:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 ************************************ 00:28:17.922 START TEST nvmf_shutdown_tc2 00:28:17.922 ************************************ 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.922 00:53:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.922 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:17.923 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:17.923 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:17.923 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:17.923 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.923 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.923 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.923 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:18.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:18.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms
00:28:18.188
00:28:18.188 --- 10.0.0.2 ping statistics ---
00:28:18.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:18.188 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms
00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:18.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:18.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms
00:28:18.188
00:28:18.188 --- 10.0.0.1 ping statistics ---
00:28:18.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:18.188 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms
00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=547523 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 547523 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 547523 ']' 00:53:36
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:18.188 00:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.188 [2024-06-08 00:53:36.408523] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:18.188 [2024-06-08 00:53:36.408588] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.188 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.506 [2024-06-08 00:53:36.495160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.506 [2024-06-08 00:53:36.556452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.506 [2024-06-08 00:53:36.556484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.506 [2024-06-08 00:53:36.556490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.506 [2024-06-08 00:53:36.556494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.506 [2024-06-08 00:53:36.556498] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
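
At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten (common/autotest_common.sh) holds the script until the target answers on its RPC socket before any rpc_cmd is issued. A minimal sketch of that wait loop, assuming the default /var/tmp/spdk.sock path and the stock scripts/rpc.py client; the real helper handles more corner cases and the retry count here is arbitrary:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # Give up immediately if the target died during startup.
        kill -0 "$pid" 2> /dev/null || return 1
        # rpc_get_methods only succeeds once the app is serving RPCs.
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
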
00:28:18.506 [2024-06-08 00:53:36.556598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.506 [2024-06-08 00:53:36.556755] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.506 [2024-06-08 00:53:36.556909] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.506 [2024-06-08 00:53:36.556911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.078 [2024-06-08 00:53:37.218524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.078 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.078 Malloc1 00:28:19.078 [2024-06-08 00:53:37.317231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.078 Malloc2 00:28:19.339 Malloc3 00:28:19.339 Malloc4 00:28:19.339 Malloc5 00:28:19.339 Malloc6 00:28:19.339 Malloc7 00:28:19.339 Malloc8 00:28:19.339 Malloc9 00:28:19.601 Malloc10 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=547780 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 547780 /var/tmp/bdevperf.sock 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 547780 ']' 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
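
Each '# cat' traced above appends one subsystem's worth of RPC lines to rpcs.txt, and the single rpc_cmd at shutdown.sh@35 replays the file in one batched RPC session, which is what produces the Malloc1 through Malloc10 bdevs and the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice. Roughly what one loop iteration writes, reconstructed from the trace; the 64 MiB size, 512-byte block size, and SPDK$i serial are placeholders for whatever the real script takes from its environment:

# One create_subsystems iteration in target/shutdown.sh (sketch):
cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# ...then one batched RPC session instead of one client invocation per line:
rpc_cmd < "$testdir/rpcs.txt"
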
00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.601 { 00:28:19.601 "params": { 00:28:19.601 "name": "Nvme$subsystem", 00:28:19.601 "trtype": "$TEST_TRANSPORT", 00:28:19.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.601 "adrfam": "ipv4", 00:28:19.601 "trsvcid": "$NVMF_PORT", 00:28:19.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.601 "hdgst": ${hdgst:-false}, 00:28:19.601 "ddgst": ${ddgst:-false} 00:28:19.601 }, 00:28:19.601 "method": "bdev_nvme_attach_controller" 00:28:19.601 } 00:28:19.601 EOF 00:28:19.601 )") 00:28:19.601 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.602 [2024-06-08 00:53:37.775175] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:19.602 [2024-06-08 00:53:37.775275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547780 ] 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.602 { 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme$subsystem", 00:28:19.602 "trtype": "$TEST_TRANSPORT", 00:28:19.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "$NVMF_PORT", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.602 "hdgst": ${hdgst:-false}, 00:28:19.602 "ddgst": ${ddgst:-false} 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 } 00:28:19.602 EOF 00:28:19.602 )") 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.602 { 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme$subsystem", 00:28:19.602 "trtype": "$TEST_TRANSPORT", 00:28:19.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "$NVMF_PORT", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.602 "hdgst": ${hdgst:-false}, 00:28:19.602 "ddgst": ${ddgst:-false} 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 } 00:28:19.602 EOF 00:28:19.602 )") 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.602 { 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme$subsystem", 00:28:19.602 "trtype": "$TEST_TRANSPORT", 00:28:19.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "$NVMF_PORT", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.602 "hdgst": ${hdgst:-false}, 00:28:19.602 "ddgst": ${ddgst:-false} 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 } 00:28:19.602 EOF 00:28:19.602 )") 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
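
For tc2 the generated JSON never touches disk: shutdown.sh@102 hands it to bdevperf through a process substitution, which is why the traced command line shows --json /dev/fd/63. The same invocation spelled out, with the workload flags from the trace:

# Invocation shape from the shutdown.sh@102 trace above:
#   -q 64      queue depth per target
#   -o 65536   I/O size in bytes (64 KiB)
#   -w verify  verify workload (reads back and checks written data)
#   -t 10      run time in seconds (the tc1 pass used -t 1)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10
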
00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:19.602 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme1", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme2", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme3", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme4", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme5", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme6", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme7", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme8", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.602 "hdgst": false, 
00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme9", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 },{ 00:28:19.602 "params": { 00:28:19.602 "name": "Nvme10", 00:28:19.602 "trtype": "tcp", 00:28:19.602 "traddr": "10.0.0.2", 00:28:19.602 "adrfam": "ipv4", 00:28:19.602 "trsvcid": "4420", 00:28:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.602 "hdgst": false, 00:28:19.602 "ddgst": false 00:28:19.602 }, 00:28:19.602 "method": "bdev_nvme_attach_controller" 00:28:19.602 }' 00:28:19.602 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.602 [2024-06-08 00:53:37.837041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.863 [2024-06-08 00:53:37.901847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.248 Running I/O for 10 seconds... 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:21.248 00:53:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:21.248 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:21.508 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.768 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.768 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 547780 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 547780 ']' 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 547780 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:21.769 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 547780 00:28:22.029 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:22.029 00:53:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:22.029 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 547780'
killing process with pid 547780
00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 547780
00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 547780
00:28:22.029 Received shutdown signal, test time was about 0.965440 seconds
00:28:22.029
00:28:22.029 Latency(us)
00:28:22.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.029 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme1n1 : 0.96 267.18 16.70 0.00 0.00 235989.97 14854.83 232434.35
00:28:22.029 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme2n1 : 0.95 269.39 16.84 0.00 0.00 229694.72 21517.65 237677.23
00:28:22.029 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme3n1 : 0.93 206.24 12.89 0.00 0.00 293673.24 22719.15 253405.87
00:28:22.029 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme4n1 : 0.95 203.15 12.70 0.00 0.00 291163.02 22828.37 260396.37
00:28:22.029 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme5n1 : 0.94 203.64 12.73 0.00 0.00 284072.96 24794.45 251658.24
00:28:22.029 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme6n1 : 0.96 265.41 16.59 0.00 0.00 213830.40 25340.59 253405.87
00:28:22.029 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme7n1 : 0.95 270.58 16.91 0.00 0.00 204345.39 20206.93 246415.36
00:28:22.029 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme8n1 : 0.96 265.67 16.60 0.00 0.00 203851.09 20206.93 248162.99
00:28:22.029 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme9n1 : 0.95 201.47 12.59 0.00 0.00 262388.05 24029.87 277872.64
00:28:22.029 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.029 Verification LBA range: start 0x0 length 0x400
00:28:22.029 Nvme10n1 : 0.93 205.79 12.86 0.00 0.00 248800.71 22937.60 248162.99
00:28:22.029 ===================================================================================================================
00:28:22.029 Total : 2358.53 147.41 0.00 0.00 242604.01 14854.83 277872.64
00:28:22.029 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 547523
00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:53:41
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.412 rmmod nvme_tcp 00:28:23.412 rmmod nvme_fabrics 00:28:23.412 rmmod nvme_keyring 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 547523 ']' 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 547523 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 547523 ']' 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 547523 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:23.412 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 547523 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 547523' 00:28:23.413 killing process with pid 547523 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 547523 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 547523 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.413 
00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.413 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.957 00:28:25.957 real 0m7.768s 00:28:25.957 user 0m23.039s 00:28:25.957 sys 0m1.251s 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.957 ************************************ 00:28:25.957 END TEST nvmf_shutdown_tc2 00:28:25.957 ************************************ 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.957 ************************************ 00:28:25.957 START TEST nvmf_shutdown_tc3 00:28:25.957 ************************************ 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 
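The tc2 teardown just traced follows the stock autotest sequence: stoptarget removes the generated bdevperf.conf and rpcs.txt, nvmftestfini unloads nvme-tcp/nvme-fabrics and flushes the test interfaces, and killprocess reaps the target pid. A minimal bash sketch of the killprocess logic as the xtrace shows it (simplified, not the verbatim autotest_common.sh source):

killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1
	kill -0 "$pid" || return 1   # signal 0 sends nothing; it only probes that the pid is alive
	local process_name
	if [ "$(uname)" = Linux ]; then
		process_name=$(ps --no-headers -o comm= "$pid")
	fi
	echo "killing process with pid $pid"
	if [ "$process_name" = sudo ]; then
		sudo kill "$pid"   # a sudo wrapper has to be signalled with elevated rights
	else
		kill "$pid"        # plain SIGTERM; the SPDK app traps it and shuts down
	fi
	wait "$pid"                # reap the child so the caller can observe its exit
}

The pid reaped here (547523) is the tc2 nvmf target; the bdevperf pid (547780) was reaped the same way just above.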
00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:25.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:25.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:25.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.957 00:53:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.957 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:25.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.958 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
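nvmf_tcp_init above splits the two detected E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses the physical link instead of the local loopback stack. Condensed from the traced commands (including the iptables rule that follows just below):

# target port lives in its own namespace; initiator stays in the default one
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic

The two ping checks that follow verify reachability in both directions before the target is started inside the namespace.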
00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:28:25.958 00:28:25.958 --- 10.0.0.2 ping statistics --- 00:28:25.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.958 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:28:25.958 00:28:25.958 --- 10.0.0.1 ping statistics --- 00:28:25.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.958 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=549055 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 549055 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 549055 ']' 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:25.958 00:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.218 [2024-06-08 00:53:44.246365] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:26.218 [2024-06-08 00:53:44.246446] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.218 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.218 [2024-06-08 00:53:44.331811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.218 [2024-06-08 00:53:44.392858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.218 [2024-06-08 00:53:44.392903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.218 [2024-06-08 00:53:44.392909] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.218 [2024-06-08 00:53:44.392913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.218 [2024-06-08 00:53:44.392917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.218 [2024-06-08 00:53:44.393026] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.218 [2024-06-08 00:53:44.393185] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.218 [2024-06-08 00:53:44.393345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.218 [2024-06-08 00:53:44.393347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.790 [2024-06-08 00:53:45.062717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter 
create_subsystems 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:26.790 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.051 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.051 Malloc1 00:28:27.051 [2024-06-08 00:53:45.161455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.051 Malloc2 00:28:27.051 Malloc3 00:28:27.051 Malloc4 00:28:27.051 Malloc5 00:28:27.051 Malloc6 00:28:27.312 Malloc7 00:28:27.312 Malloc8 00:28:27.312 Malloc9 00:28:27.312 Malloc10 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 
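Each of the ten '# cat' steps above appends one subsystem's RPC batch to rpcs.txt in the target test directory, and the single rpc_cmd call at shutdown.sh@35 replays the whole file against the target, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear in one burst. xtrace does not echo heredoc bodies, so the following is a hypothetical reconstruction of one batch using standard SPDK RPC names (the malloc size and serial values are assumptions):

for i in {1..10}; do
	cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # one rpc session replays the whole batch

Batching through a single rpc invocation avoids paying interpreter start-up cost for all forty calls.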
00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=549435 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 549435 /var/tmp/bdevperf.sock 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 549435 ']' 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.312 { 00:28:27.312 "params": { 00:28:27.312 "name": "Nvme$subsystem", 00:28:27.312 "trtype": "$TEST_TRANSPORT", 00:28:27.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.312 "adrfam": "ipv4", 00:28:27.312 "trsvcid": "$NVMF_PORT", 00:28:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.312 "hdgst": ${hdgst:-false}, 00:28:27.312 "ddgst": ${ddgst:-false} 00:28:27.312 }, 00:28:27.312 "method": "bdev_nvme_attach_controller" 00:28:27.312 } 00:28:27.312 EOF 00:28:27.312 )") 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.312 { 00:28:27.312 "params": { 00:28:27.312 "name": "Nvme$subsystem", 00:28:27.312 "trtype": "$TEST_TRANSPORT", 00:28:27.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.312 "adrfam": "ipv4", 00:28:27.312 "trsvcid": "$NVMF_PORT", 00:28:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.312 "hdgst": ${hdgst:-false}, 00:28:27.312 "ddgst": ${ddgst:-false} 00:28:27.312 }, 00:28:27.312 
"method": "bdev_nvme_attach_controller" 00:28:27.312 } 00:28:27.312 EOF 00:28:27.312 )") 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.312 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.313 { 00:28:27.313 "params": { 00:28:27.313 "name": "Nvme$subsystem", 00:28:27.313 "trtype": "$TEST_TRANSPORT", 00:28:27.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.313 "adrfam": "ipv4", 00:28:27.313 "trsvcid": "$NVMF_PORT", 00:28:27.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.313 "hdgst": ${hdgst:-false}, 00:28:27.313 "ddgst": ${ddgst:-false} 00:28:27.313 }, 00:28:27.313 "method": "bdev_nvme_attach_controller" 00:28:27.313 } 00:28:27.313 EOF 00:28:27.313 )") 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.313 { 00:28:27.313 "params": { 00:28:27.313 "name": "Nvme$subsystem", 00:28:27.313 "trtype": "$TEST_TRANSPORT", 00:28:27.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.313 "adrfam": "ipv4", 00:28:27.313 "trsvcid": "$NVMF_PORT", 00:28:27.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.313 "hdgst": ${hdgst:-false}, 00:28:27.313 "ddgst": ${ddgst:-false} 00:28:27.313 }, 00:28:27.313 "method": "bdev_nvme_attach_controller" 00:28:27.313 } 00:28:27.313 EOF 00:28:27.313 )") 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.313 { 00:28:27.313 "params": { 00:28:27.313 "name": "Nvme$subsystem", 00:28:27.313 "trtype": "$TEST_TRANSPORT", 00:28:27.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.313 "adrfam": "ipv4", 00:28:27.313 "trsvcid": "$NVMF_PORT", 00:28:27.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.313 "hdgst": ${hdgst:-false}, 00:28:27.313 "ddgst": ${ddgst:-false} 00:28:27.313 }, 00:28:27.313 "method": "bdev_nvme_attach_controller" 00:28:27.313 } 00:28:27.313 EOF 00:28:27.313 )") 00:28:27.313 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.574 { 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme$subsystem", 00:28:27.574 "trtype": "$TEST_TRANSPORT", 00:28:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "$NVMF_PORT", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.574 "hdgst": ${hdgst:-false}, 00:28:27.574 "ddgst": ${ddgst:-false} 00:28:27.574 }, 00:28:27.574 "method": 
"bdev_nvme_attach_controller" 00:28:27.574 } 00:28:27.574 EOF 00:28:27.574 )") 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 [2024-06-08 00:53:45.601350] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:27.574 [2024-06-08 00:53:45.601406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549435 ] 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.574 { 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme$subsystem", 00:28:27.574 "trtype": "$TEST_TRANSPORT", 00:28:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "$NVMF_PORT", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.574 "hdgst": ${hdgst:-false}, 00:28:27.574 "ddgst": ${ddgst:-false} 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 } 00:28:27.574 EOF 00:28:27.574 )") 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.574 { 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme$subsystem", 00:28:27.574 "trtype": "$TEST_TRANSPORT", 00:28:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "$NVMF_PORT", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.574 "hdgst": ${hdgst:-false}, 00:28:27.574 "ddgst": ${ddgst:-false} 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 } 00:28:27.574 EOF 00:28:27.574 )") 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.574 { 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme$subsystem", 00:28:27.574 "trtype": "$TEST_TRANSPORT", 00:28:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "$NVMF_PORT", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.574 "hdgst": ${hdgst:-false}, 00:28:27.574 "ddgst": ${ddgst:-false} 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 } 00:28:27.574 EOF 00:28:27.574 )") 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.574 { 00:28:27.574 "params": { 00:28:27.574 
"name": "Nvme$subsystem", 00:28:27.574 "trtype": "$TEST_TRANSPORT", 00:28:27.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "$NVMF_PORT", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.574 "hdgst": ${hdgst:-false}, 00:28:27.574 "ddgst": ${ddgst:-false} 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 } 00:28:27.574 EOF 00:28:27.574 )") 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:27.574 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme1", 00:28:27.574 "trtype": "tcp", 00:28:27.574 "traddr": "10.0.0.2", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "4420", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.574 "hdgst": false, 00:28:27.574 "ddgst": false 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 },{ 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme2", 00:28:27.574 "trtype": "tcp", 00:28:27.574 "traddr": "10.0.0.2", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "4420", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.574 "hdgst": false, 00:28:27.574 "ddgst": false 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 },{ 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme3", 00:28:27.574 "trtype": "tcp", 00:28:27.574 "traddr": "10.0.0.2", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "4420", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.574 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.574 "hdgst": false, 00:28:27.574 "ddgst": false 00:28:27.574 }, 00:28:27.574 "method": "bdev_nvme_attach_controller" 00:28:27.574 },{ 00:28:27.574 "params": { 00:28:27.574 "name": "Nvme4", 00:28:27.574 "trtype": "tcp", 00:28:27.574 "traddr": "10.0.0.2", 00:28:27.574 "adrfam": "ipv4", 00:28:27.574 "trsvcid": "4420", 00:28:27.574 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme5", 00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme6", 00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme7", 
00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme8", 00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme9", 00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 },{ 00:28:27.575 "params": { 00:28:27.575 "name": "Nvme10", 00:28:27.575 "trtype": "tcp", 00:28:27.575 "traddr": "10.0.0.2", 00:28:27.575 "adrfam": "ipv4", 00:28:27.575 "trsvcid": "4420", 00:28:27.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.575 "hdgst": false, 00:28:27.575 "ddgst": false 00:28:27.575 }, 00:28:27.575 "method": "bdev_nvme_attach_controller" 00:28:27.575 }' 00:28:27.575 [2024-06-08 00:53:45.660878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.575 [2024-06-08 00:53:45.725780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.487 Running I/O for 10 seconds... 
00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:29.487 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:29.748 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=193 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 193 -ge 100 ']' 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 549055 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 549055 ']' 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 549055 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 549055 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 549055' 00:28:30.017 killing process with pid 549055 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 549055 00:28:30.017 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 549055 00:28:30.017 [2024-06-08 00:53:48.257793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d2e0 is same with the state(5) to be set 00:28:30.017 [2024-06-08 00:53:48.257838] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d2e0 is same with the state(5) to be set 00:28:30.017 [2024-06-08 00:53:48.257844] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d2e0 is same with the state(5) to be set 00:28:30.017 [2024-06-08 00:53:48.257849] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
00:28:30.017 [2024-06-08 00:53:48.257793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d2e0 is same with the state(5) to be set
[previous line repeated verbatim for tqpair=0x216d2e0, timestamps 00:53:48.257838 through 00:53:48.258119]
00:28:30.018 [2024-06-08 00:53:48.259015] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216fcc0 is same with the state(5) to be set
[previous line repeated verbatim for tqpair=0x216fcc0, timestamps through 00:53:48.259343]
00:28:30.018 [2024-06-08 00:53:48.259345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.018 [2024-06-08 00:53:48.259379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.018 [2024-06-08 00:53:48.259390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.018 [2024-06-08 00:53:48.259397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.018 [2024-06-08 00:53:48.259420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.019 [2024-06-08 00:53:48.259428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.019 [2024-06-08 00:53:48.259436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.019 [2024-06-08 00:53:48.259443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.019 [2024-06-08 00:53:48.259450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d140 is same with the state(5) to be set
00:28:30.019 [2024-06-08 00:53:48.260300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d780 is same with the state(5) to be set
[previous line repeated verbatim for tqpair=0x216d780, timestamps through 00:53:48.260607]
00:28:30.019 [2024-06-08 00:53:48.261613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216dc20 is same with the state(5) to be set
[previous line repeated verbatim for tqpair=0x216dc20, timestamps through 00:53:48.261921]
00:28:30.020 [2024-06-08 00:53:48.263350] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216e580 is same with the state(5) to be set
00:28:30.020 [2024-06-08 00:53:48.263713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.020 [2024-06-08 00:53:48.263736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
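The qpair recv-state errors above, and the aborted in-flight WRITEs below, are the nvmf target (pid 549055, running as reactor_1) being torn down by the killprocess call traced earlier. A condensed reconstruction of that helper, per the xtrace at common/autotest_common.sh@949-@973; the sudo branch of the real helper (an escalated kill) is elided in this sketch:

    # Condensed reconstruction of common/autotest_common.sh's killprocess(),
    # matching the xtrace earlier; @NNN comments give the traced lines.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @949: a pid is required
        kill -0 "$pid" || return 1                # @953: is it still running?
        if [ "$(uname)" = Linux ]; then           # @954
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @955
            if [ "$process_name" = sudo ]; then   # @959: real helper kills
                :                                 #       via sudo; elided here
            fi
        fi
        echo "killing process with pid $pid"      # @967
        kill "$pid"                               # @968
        wait "$pid"                               # @973: reap the child
    }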
00:28:30.020 [2024-06-08 00:53:48.263755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.020 [2024-06-08 00:53:48.263772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the WRITE/ABORTED pair above repeats for cid:16 through cid:54, lba advancing by 128 from 26624 to 31488, timestamps 00:53:48.263782 through 00:53:48.264485]
00:28:30.020 [2024-06-08 00:53:48.263831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216ea20 is same with the state(5) to be set
[previous line repeated verbatim for tqpair=0x216ea20 through 00:53:48.264201; in the raw capture these repeats were interleaved mid-line with the WRITE abort notices above]
00:28:30.023 [2024-06-08 00:53:48.264494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55
nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.264875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.264928] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e94780 was disconnected and freed. reset controller. 00:28:30.023 [2024-06-08 00:53:48.264980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.264994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.264998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.265003] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265008] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.265013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265018] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.265023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.265033] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.265043] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.265048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265062] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.265067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.265072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.023 [2024-06-08 00:53:48.265084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.023 [2024-06-08 00:53:48.265094] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.023 [2024-06-08 00:53:48.265100] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5)
to be set 00:28:30.024 [2024-06-08 00:53:48.265120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265125] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265130] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265140] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265145] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265165] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265177] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265182] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 
00:28:30.024 [2024-06-08 00:53:48.265187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265193] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265205] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265211] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265216] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265221] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265226] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265234] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265241] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265246] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265252] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265256] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265261] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set
00:28:30.024 [2024-06-08 00:53:48.265261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265266] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265271] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265283] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265293] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265298] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265310] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265321] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216eee0 is same with the state(5) to be set 00:28:30.024 [2024-06-08 00:53:48.265328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.024 [2024-06-08 00:53:48.265345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.024 [2024-06-08 00:53:48.265454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.024 [2024-06-08 00:53:48.265461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 
00:53:48.265520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f380 is same with the state(5) to be set 00:28:30.025 [2024-06-08 00:53:48.265922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f380 is same with the state(5) to be set 00:28:30.025 [2024-06-08 00:53:48.265948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.265990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.265997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.266006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.266013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.266022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.266030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.025 [2024-06-08 00:53:48.266039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.025 [2024-06-08 00:53:48.266047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.026 [2024-06-08 00:53:48.266057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.026 [2024-06-08 00:53:48.266064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.026 [2024-06-08 00:53:48.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.026 [2024-06-08 00:53:48.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.026 [2024-06-08 00:53:48.266089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.026 [2024-06-08 00:53:48.266096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.026 [2024-06-08 00:53:48.266106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.026 [2024-06-08 00:53:48.266107] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.026 [2024-06-08 00:53:48.266123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266129] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266134] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266141] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08
00:53:48.266157] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266160] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e95c50 was disconnected and freed. reset controller. 00:28:30.026 [2024-06-08 00:53:48.266161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266169] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266211] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266261] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266309] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266359] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266425] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266480] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266579] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266628] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266626] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.026 [2024-06-08 00:53:48.266677] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266782] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.266981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267081] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026
[2024-06-08 00:53:48.267130] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267229] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267327] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267496] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267545] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267595] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267847] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267946] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.267995] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268101] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268151] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268252] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268302] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.026 [2024-06-08 00:53:48.268353] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268404] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268475] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268527] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268579] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268682] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.268941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216f820 is same with the state(5) to be set 00:28:30.027 [2024-06-08 00:53:48.269362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.269901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.269951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.270004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:30.027 [2024-06-08 00:53:48.285617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 
[2024-06-08 00:53:48.285792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.027 [2024-06-08 00:53:48.285927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.027 [2024-06-08 00:53:48.285936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.285944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.285954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 
00:53:48.285961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.285971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.285980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.285990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.285998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286136] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.028 [2024-06-08 00:53:48.286486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286561] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d47940 was disconnected and freed. reset controller. 00:28:30.028 [2024-06-08 00:53:48.286689] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.028 [2024-06-08 00:53:48.286711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:30.028 [2024-06-08 00:53:48.286763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:30.028 [2024-06-08 00:53:48.286803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bf40 (9): Bad file descriptor 00:28:30.028 [2024-06-08 00:53:48.286817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17930 (9): Bad file descriptor 00:28:30.028 [2024-06-08 00:53:48.286859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.028 [2024-06-08 00:53:48.286869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.028 [2024-06-08 00:53:48.286885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.028 [2024-06-08 00:53:48.286902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.028 [2024-06-08 00:53:48.286917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.028 [2024-06-08 00:53:48.286924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5b790 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.286948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.286957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.286966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.286978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.286987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.286994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1eb0 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f028d0 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287183] 
nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f16a00 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852610 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4d140 (9): Bad file descriptor 00:28:30.029 [2024-06-08 00:53:48.287314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f175a0 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.029 [2024-06-08 00:53:48.287464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.029 [2024-06-08 00:53:48.287474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1cd0 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.287549] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.029 [2024-06-08 00:53:48.289063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:30.029 [2024-06-08 00:53:48.289092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1eb0 (9): Bad file descriptor 00:28:30.029 [2024-06-08 00:53:48.289166] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.029 [2024-06-08 00:53:48.290292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.029 [2024-06-08 00:53:48.290314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17930 with addr=10.0.0.2, port=4420 00:28:30.029 [2024-06-08 00:53:48.290323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17930 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.290837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.029 [2024-06-08 00:53:48.290876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7bf40 with addr=10.0.0.2, port=4420 00:28:30.029 [2024-06-08 00:53:48.290888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7bf40 is same with the state(5) to be set 00:28:30.029 [2024-06-08 00:53:48.291010] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.029 [2024-06-08 00:53:48.291055] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.029 [2024-06-08 00:53:48.291879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.029 [2024-06-08 00:53:48.291916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef1eb0 with addr=10.0.0.2, port=4420 00:28:30.029 [2024-06-08 00:53:48.291929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1eb0 is same with the state(5) to be set 00:28:30.029 [2024-06-08 
00:53:48.291945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17930 (9): Bad file descriptor 00:28:30.029 [2024-06-08 00:53:48.291958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bf40 (9): Bad file descriptor 00:28:30.029 [2024-06-08 00:53:48.292089] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:30.029 [2024-06-08 00:53:48.292113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1eb0 (9): Bad file descriptor 00:28:30.029 [2024-06-08 00:53:48.292123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:30.029 [2024-06-08 00:53:48.292131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:30.029 [2024-06-08 00:53:48.292140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:30.029 [2024-06-08 00:53:48.292154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:30.029 [2024-06-08 00:53:48.292162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:30.029 [2024-06-08 00:53:48.292169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:30.030 [2024-06-08 00:53:48.292220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.030 [2024-06-08 00:53:48.292229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.030 [2024-06-08 00:53:48.292236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:30.030 [2024-06-08 00:53:48.292243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:30.030 [2024-06-08 00:53:48.292255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:30.030 [2024-06-08 00:53:48.292295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.299 [2024-06-08 00:53:48.296760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5b790 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f028d0 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f16a00 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1852610 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f175a0 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1cd0 (9): Bad file descriptor 00:28:30.299 [2024-06-08 00:53:48.296966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.296979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.296996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.299 [2024-06-08 00:53:48.297202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.299 [2024-06-08 00:53:48.297212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.300 [2024-06-08 00:53:48.297646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 
00:53:48.297824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.297984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.300 [2024-06-08 00:53:48.297994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.300 [2024-06-08 00:53:48.298001] 
00:28:30.300 [2024-06-08 00:53:48.298001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:58-62, lba 32000-32512 in steps of 128 ...]
00:28:30.300 [2024-06-08 00:53:48.298099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.300 [2024-06-08 00:53:48.298107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.300 [2024-06-08 00:53:48.298116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13b40 is same with the state(5) to be set
00:28:30.300 [2024-06-08 00:53:48.299480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:30.300 [2024-06-08 00:53:48.300024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.300 [2024-06-08 00:53:48.300042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4d140 with addr=10.0.0.2, port=4420
00:28:30.300 [2024-06-08 00:53:48.300051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d140 is same with the state(5) to be set
00:28:30.300 [2024-06-08 00:53:48.300349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:30.300 [2024-06-08 00:53:48.300362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:30.300 [2024-06-08 00:53:48.300383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4d140 (9): Bad file descriptor
00:28:30.300 [2024-06-08 00:53:48.300836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.300 [2024-06-08 00:53:48.300851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7bf40 with addr=10.0.0.2, port=4420
00:28:30.300 [2024-06-08 00:53:48.300859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7bf40 is same with the state(5) to be set
00:28:30.300 [2024-06-08 00:53:48.301279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.300 [2024-06-08 00:53:48.301291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17930 with addr=10.0.0.2, port=4420
00:28:30.300 [2024-06-08 00:53:48.301298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17930 is same with the state(5) to be set
00:28:30.301 [2024-06-08 00:53:48.301306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:30.301 [2024-06-08 00:53:48.301312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:30.301 [2024-06-08 00:53:48.301320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:30.301 [2024-06-08 00:53:48.301365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:30.301 [2024-06-08 00:53:48.301374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bf40 (9): Bad file descriptor
00:28:30.301 [2024-06-08 00:53:48.301387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17930 (9): Bad file descriptor
00:28:30.301 [2024-06-08 00:53:48.301438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:28:30.301 [2024-06-08 00:53:48.301447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:28:30.301 [2024-06-08 00:53:48.301454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:28:30.301 [2024-06-08 00:53:48.301465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:30.301 [2024-06-08 00:53:48.301472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:30.301 [2024-06-08 00:53:48.301479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:30.301 [2024-06-08 00:53:48.301516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:30.301 [2024-06-08 00:53:48.301527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:30.301 [2024-06-08 00:53:48.301533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:30.301 [2024-06-08 00:53:48.301782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.301 [2024-06-08 00:53:48.301795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef1eb0 with addr=10.0.0.2, port=4420
00:28:30.301 [2024-06-08 00:53:48.301803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1eb0 is same with the state(5) to be set
00:28:30.301 [2024-06-08 00:53:48.301839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1eb0 (9): Bad file descriptor
00:28:30.301 [2024-06-08 00:53:48.301874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:28:30.301 [2024-06-08 00:53:48.301882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:28:30.301 [2024-06-08 00:53:48.301889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:28:30.301 [2024-06-08 00:53:48.301928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:30.301 [2024-06-08 00:53:48.306900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.301 [2024-06-08 00:53:48.306916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-62, lba 16512-24320 in steps of 128 ...]
00:28:30.302 [2024-06-08 00:53:48.308016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.302 [2024-06-08 00:53:48.308023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.302 [2024-06-08 00:53:48.308031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14d80 is same with the state(5) to be set
00:28:30.302 [2024-06-08 00:53:48.309374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.302 [2024-06-08 00:53:48.309389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-62, lba 24704-32512 in steps of 128 ...]
00:28:30.304 [2024-06-08 00:53:48.310500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.304 [2024-06-08 00:53:48.310508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.304 [2024-06-08 00:53:48.310517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e932f0 is same with the state(5) to be set
00:28:30.304 [2024-06-08 00:53:48.311854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.304 [2024-06-08 00:53:48.311866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-36, lba 16512-20992 in steps of 128 ...]
00:28:30.305 [2024-06-08 00:53:48.312516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.305 [2024-06-08 00:53:48.312524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.305 [2024-06-08 00:53:48.312533] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.305 [2024-06-08 00:53:48.312541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.305 [2024-06-08 00:53:48.312552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.305 [2024-06-08 00:53:48.312559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.305 [2024-06-08 00:53:48.312569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.305 [2024-06-08 00:53:48.312577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.305 [2024-06-08 00:53:48.312586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.305 [2024-06-08 00:53:48.312594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.305 [2024-06-08 00:53:48.312603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.312976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.312985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44f00 is same with the state(5) to be set 00:28:30.306 [2024-06-08 00:53:48.314319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.306 [2024-06-08 00:53:48.314653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.306 [2024-06-08 00:53:48.314663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.314982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.314992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.307 [2024-06-08 00:53:48.315325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.307 [2024-06-08 00:53:48.315333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.315461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.315470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46420 is same with the state(5) to be set 00:28:30.308 [2024-06-08 00:53:48.316797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.316987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.316997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.308 [2024-06-08 00:53:48.317158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.308 [2024-06-08 00:53:48.317168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.309 [2024-06-08 00:53:48.317666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 
00:53:48.317840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.309 [2024-06-08 00:53:48.317858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.309 [2024-06-08 00:53:48.317868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.317875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.317886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.317894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.317903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.317910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.317919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d30 is same with the state(5) to be set 00:28:30.310 [2024-06-08 00:53:48.319486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.319985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.319995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.310 [2024-06-08 00:53:48.320124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.310 [2024-06-08 00:53:48.320134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.311 [2024-06-08 00:53:48.320618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.311 [2024-06-08 00:53:48.320626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c1e0 is same with the state(5) to be set 00:28:30.311 [2024-06-08 00:53:48.323202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:30.311 [2024-06-08 00:53:48.323225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] 
resetting controller
00:28:30.311 [2024-06-08 00:53:48.323235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:30.311 [2024-06-08 00:53:48.323245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:30.311 [2024-06-08 00:53:48.323328] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:30.311 [2024-06-08 00:53:48.323344] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:30.311 [2024-06-08 00:53:48.323425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:30.311 task offset: 26368 on job bdev=Nvme4n1 fails
00:28:30.311 
00:28:30.311 Latency(us)
00:28:30.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:30.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme1n1 ended in about 0.94 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme1n1 : 0.94 203.66 12.73 67.89 0.00 233037.23 18786.99 246415.36
00:28:30.311 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme2n1 ended in about 0.95 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme2n1 : 0.95 134.37 8.40 67.18 0.00 307811.84 22063.79 309329.92
00:28:30.311 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme3n1 ended in about 0.96 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme3n1 : 0.96 201.03 12.56 67.01 0.00 226584.75 10048.85 221074.77
00:28:30.311 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme4n1 ended in about 0.91 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme4n1 : 0.91 210.65 13.17 70.22 0.00 210965.71 3904.85 248162.99
00:28:30.311 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme5n1 ended in about 0.91 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme5n1 : 0.91 210.37 13.15 70.12 0.00 206571.52 5024.43 246415.36
00:28:30.311 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme6n1 ended in about 0.96 seconds with error
00:28:30.311 Verification LBA range: start 0x0 length 0x400
00:28:30.311 Nvme6n1 : 0.96 133.67 8.35 66.84 0.00 284078.93 26105.17 272629.76
00:28:30.311 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.311 Job: Nvme7n1 ended in about 0.96 seconds with error
00:28:30.312 Verification LBA range: start 0x0 length 0x400
00:28:30.312 Nvme7n1 : 0.96 133.33 8.33 66.66 0.00 278543.93 23046.83 253405.87
00:28:30.312 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.312 Job: Nvme8n1 ended in about 0.93 seconds with error
00:28:30.312 Verification LBA range: start 0x0 length 0x400
00:28:30.312 Nvme8n1 : 0.93 205.93 12.87 68.64 0.00 197235.63 21189.97 228939.09
00:28:30.312 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.312 Job: Nvme9n1 ended in about 0.96 seconds with error
00:28:30.312 Verification LBA range: start 0x0 length 0x400
00:28:30.312 Nvme9n1 : 0.96 132.99 8.31 66.50 0.00 266735.79 23265.28 295348.91
00:28:30.312 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:30.312 Job: Nvme10n1 ended in about 0.97 seconds with error
00:28:30.312 Verification LBA range: start 0x0 length 0x400
00:28:30.312 Nvme10n1 : 0.97 132.62 8.29 66.31 0.00 261507.13 19770.03 249910.61
00:28:30.312 ===================================================================================================================
00:28:30.312 Total : 1698.62 106.16 677.37 0.00 242674.63 3904.85 309329.92
00:28:30.312 [2024-06-08 00:53:48.354842] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:30.312 [2024-06-08 00:53:48.354893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:30.312 [2024-06-08 00:53:48.355323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.312 [2024-06-08 00:53:48.355343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f175a0 with addr=10.0.0.2, port=4420
00:28:30.312 [2024-06-08 00:53:48.355354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f175a0 is same with the state(5) to be set
00:28:30.312 [2024-06-08 00:53:48.355556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.312 [2024-06-08 00:53:48.355567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5b790 with addr=10.0.0.2, port=4420
00:28:30.312 [2024-06-08 00:53:48.355574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5b790 is same with the state(5) to be set
00:28:30.312 [2024-06-08 00:53:48.355931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.312 [2024-06-08 00:53:48.355942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1852610 with addr=10.0.0.2, port=4420
00:28:30.312 [2024-06-08 00:53:48.355950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852610 is same with the state(5) to be set
00:28:30.312 [2024-06-08 00:53:48.356359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.312 [2024-06-08 00:53:48.356370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f16a00 with addr=10.0.0.2, port=4420
00:28:30.312 [2024-06-08 00:53:48.356377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f16a00 is same with the state(5) to be set
00:28:30.312 [2024-06-08 00:53:48.357977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:30.312 [2024-06-08 00:53:48.358000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:30.312 [2024-06-08 00:53:48.358009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:30.312 [2024-06-08 00:53:48.358018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:30.312 [2024-06-08 00:53:48.358503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.312 [2024-06-08 00:53:48.358517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef1cd0 with addr=10.0.0.2, port=4420
00:28:30.312 [2024-06-08 00:53:48.358525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1cd0 is same 
with the state(5) to be set 00:28:30.312 [2024-06-08 00:53:48.358906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.312 [2024-06-08 00:53:48.358917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f028d0 with addr=10.0.0.2, port=4420 00:28:30.312 [2024-06-08 00:53:48.358924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f028d0 is same with the state(5) to be set 00:28:30.312 [2024-06-08 00:53:48.358937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f175a0 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.358948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5b790 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.358958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1852610 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.358966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f16a00 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.359002] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:30.312 [2024-06-08 00:53:48.359014] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:30.312 [2024-06-08 00:53:48.359025] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:30.312 [2024-06-08 00:53:48.359036] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:30.312 [2024-06-08 00:53:48.359536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.312 [2024-06-08 00:53:48.359551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4d140 with addr=10.0.0.2, port=4420 00:28:30.312 [2024-06-08 00:53:48.359559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d140 is same with the state(5) to be set 00:28:30.312 [2024-06-08 00:53:48.359744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.312 [2024-06-08 00:53:48.359754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17930 with addr=10.0.0.2, port=4420 00:28:30.312 [2024-06-08 00:53:48.359761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17930 is same with the state(5) to be set 00:28:30.312 [2024-06-08 00:53:48.360020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.312 [2024-06-08 00:53:48.360030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7bf40 with addr=10.0.0.2, port=4420 00:28:30.312 [2024-06-08 00:53:48.360037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7bf40 is same with the state(5) to be set 00:28:30.312 [2024-06-08 00:53:48.360245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.312 [2024-06-08 00:53:48.360256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef1eb0 with addr=10.0.0.2, port=4420 00:28:30.312 [2024-06-08 00:53:48.360263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef1eb0 is same with the state(5) to be set 00:28:30.312 [2024-06-08 
00:53:48.360272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1cd0 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f028d0 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.312 [2024-06-08 00:53:48.360462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.312 [2024-06-08 00:53:48.360468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.312 [2024-06-08 00:53:48.360475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
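Two error codes dominate this stretch of the log. The "connect() failed, errno = 111" lines from posix_sock_create are ECONNREFUSED: the target side has already torn down its listener during shutdown, so every host reconnect to 10.0.0.2 port 4420 is refused, which is what drives each controller into the failed state seen here. The "(00/08)" pair that spdk_nvme_print_completion attaches to the aborted READs earlier is NVMe status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion. A minimal bash sketch of that decoding, offered purely as an illustration (the helper name is not part of the harness):

  decode_cpl() {                 # usage: decode_cpl 00/08  (the SCT/SC pair from the log)
    local sct="${1%/*}" sc="${1#*/}"
    case "$sct/$sc" in
      00/00) echo "generic status: successful completion" ;;
      00/08) echo "generic status: command aborted due to SQ deletion" ;;
      *)     echo "sct=0x$sct sc=0x$sc (see the NVMe base spec status code tables)" ;;
    esac
  }
  decode_cpl 00/08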
00:28:30.312 [2024-06-08 00:53:48.360482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4d140 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17930 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bf40 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef1eb0 (9): Bad file descriptor 00:28:30.312 [2024-06-08 00:53:48.360517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:30.312 [2024-06-08 00:53:48.360546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:30.312 [2024-06-08 00:53:48.360553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:30.312 [2024-06-08 00:53:48.360581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.312 [2024-06-08 00:53:48.360588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.313 [2024-06-08 00:53:48.360594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.313 [2024-06-08 00:53:48.360604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.313 [2024-06-08 00:53:48.360611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.313 [2024-06-08 00:53:48.360621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:30.313 [2024-06-08 00:53:48.360627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:30.313 [2024-06-08 00:53:48.360633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:30.313 [2024-06-08 00:53:48.360643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:30.313 [2024-06-08 00:53:48.360650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:30.313 [2024-06-08 00:53:48.360656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
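The teardown traced just below is deliberately tolerant of a target that has already exited: shutdown.sh clears nvmfpid (line 136), sleeps, and its kill -9 against the stale pid 549435 fails with "No such process", which the script masks with true (line 142) so that set -e does not abort the remaining cleanup. A minimal sketch of the same idiom, assuming plain bash and an illustrative function name not taken from the harness:

  kill_quiet() {                         # tolerate an already-dead pid under set -e
    local pid="$1"
    [ -n "$pid" ] || return 0            # pid may be empty if the app already crashed
    kill -9 "$pid" 2>/dev/null || true   # ESRCH ("No such process") is treated as success
  }
  kill_quiet 549435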
00:28:30.313 [2024-06-08 00:53:48.360666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:30.313 [2024-06-08 00:53:48.360672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:30.313 [2024-06-08 00:53:48.360679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:30.313 [2024-06-08 00:53:48.360710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.313 [2024-06-08 00:53:48.360718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.313 [2024-06-08 00:53:48.360724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.313 [2024-06-08 00:53:48.360731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.313 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:30.313 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 549435 00:28:31.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (549435) - No such process 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.256 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.518 rmmod nvme_tcp 00:28:31.518 rmmod nvme_fabrics 00:28:31.518 rmmod nvme_keyring 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.518 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.434 00:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.434 00:28:33.434 real 0m7.869s 00:28:33.434 user 0m19.407s 00:28:33.434 sys 0m1.247s 00:28:33.434 00:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:33.434 00:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.434 ************************************ 00:28:33.434 END TEST nvmf_shutdown_tc3 00:28:33.434 ************************************ 00:28:33.434 00:53:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:33.434 00:28:33.434 real 0m32.511s 00:28:33.434 user 1m16.890s 00:28:33.434 sys 0m9.287s 00:28:33.434 00:53:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:33.695 00:53:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.695 ************************************ 00:28:33.695 END TEST nvmf_shutdown 00:28:33.695 ************************************ 00:28:33.696 00:53:51 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.696 00:53:51 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.696 00:53:51 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:28:33.696 00:53:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:33.696 00:53:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.696 ************************************ 00:28:33.696 START TEST nvmf_multicontroller 00:28:33.696 ************************************ 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.696 * Looking for test storage... 
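With the shutdown suite finished, run_test launches nvmf_multicontroller, whose first step (traced below) is to source nvmf/common.sh: that file pins the target service ports to 4420/4421/4422 and derives a host NQN with nvme gen-hostnqn. A minimal sketch of the host-side connect those variables ultimately feed, with the target address and subsystem NQN as illustrative placeholders rather than values verified against this run:

  HOSTNQN="$(nvme gen-hostnqn)"          # same nvme-cli helper common.sh calls below
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"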
00:28:33.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.696 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:33.958 00:53:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.958 00:53:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.597 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.598 00:53:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:40.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:40.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:40.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:40.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.598 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.878 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.878 00:53:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.878 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:40.878 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:40.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:28:40.878 00:28:40.878 --- 10.0.0.2 ping statistics --- 00:28:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.878 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:40.878 00:28:40.878 --- 10.0.0.1 ping statistics --- 00:28:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.878 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:40.878 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=554308 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 554308 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 554308 ']' 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- 
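
nvmf_tcp_init turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove the path in both directions before any NVMe/TCP traffic flows. The same topology, collected from the trace into one runnable block (interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk                                   # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
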
common/autotest_common.sh@835 -- # local max_retries=100 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:41.140 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.140 [2024-06-08 00:53:59.225378] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:41.140 [2024-06-08 00:53:59.225470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.140 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.140 [2024-06-08 00:53:59.313659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:41.140 [2024-06-08 00:53:59.407903] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.140 [2024-06-08 00:53:59.407963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.140 [2024-06-08 00:53:59.407971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.140 [2024-06-08 00:53:59.407978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.140 [2024-06-08 00:53:59.407983] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.140 [2024-06-08 00:53:59.408121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.140 [2024-06-08 00:53:59.408288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.140 [2024-06-08 00:53:59.408289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.712 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:41.712 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:28:41.712 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.712 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:41.712 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 [2024-06-08 00:54:00.033812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 
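
nvmf_tgt is launched inside the namespace with core mask 0xE (binary 1110, i.e. cores 1-3, matching the three "Reactor started" notices above), and waitforlisten blocks until PID 554308 is actually serving RPCs on /var/tmp/spdk.sock. A hypothetical polling loop with the same effect, run from the SPDK tree; the real helper in autotest_common.sh may check readiness differently:

  # poll until the target answers on its RPC socket (rpc.py is scripts/rpc.py)
  for _ in $(seq 1 100); do
    kill -0 554308 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done
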
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 Malloc0 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 [2024-06-08 00:54:00.103625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 [2024-06-08 00:54:00.115580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 Malloc1 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=554539 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 554539 /var/tmp/bdevperf.sock 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 554539 ']' 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:41.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
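
Condensing the rpc_cmd traces above: the target gets one TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems that each expose their bdev on both listeners (4420 and 4421). rpc_cmd is evidently a thin wrapper over scripts/rpc.py, so the equivalent direct calls are approximately:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...repeated for Malloc1 / nqn.2016-06.io.spdk:cnode2 / serial SPDK00000000000002
  # then bdevperf is started on its own RPC socket, idle (-z) until told to run:
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
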
00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:41.974 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.917 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:42.917 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:28:42.917 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:42.917 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.917 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.917 NVMe0n1 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.917 1 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.917 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.917 request: 00:28:42.917 { 00:28:42.917 "name": "NVMe0", 00:28:42.918 "trtype": "tcp", 00:28:42.918 "traddr": "10.0.0.2", 00:28:42.918 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:42.918 "hostaddr": "10.0.0.2", 00:28:42.918 "hostsvcid": "60000", 00:28:42.918 "adrfam": "ipv4", 00:28:42.918 "trsvcid": "4420", 00:28:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.918 "method": 
"bdev_nvme_attach_controller", 00:28:42.918 "req_id": 1 00:28:42.918 } 00:28:42.918 Got JSON-RPC error response 00:28:42.918 response: 00:28:42.918 { 00:28:42.918 "code": -114, 00:28:42.918 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:42.918 } 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.918 request: 00:28:42.918 { 00:28:42.918 "name": "NVMe0", 00:28:42.918 "trtype": "tcp", 00:28:42.918 "traddr": "10.0.0.2", 00:28:42.918 "hostaddr": "10.0.0.2", 00:28:42.918 "hostsvcid": "60000", 00:28:42.918 "adrfam": "ipv4", 00:28:42.918 "trsvcid": "4420", 00:28:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:42.918 "method": "bdev_nvme_attach_controller", 00:28:42.918 "req_id": 1 00:28:42.918 } 00:28:42.918 Got JSON-RPC error response 00:28:42.918 response: 00:28:42.918 { 00:28:42.918 "code": -114, 00:28:42.918 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:42.918 } 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.918 request: 00:28:42.918 { 00:28:42.918 "name": "NVMe0", 00:28:42.918 "trtype": "tcp", 00:28:42.918 "traddr": "10.0.0.2", 00:28:42.918 "hostaddr": "10.0.0.2", 00:28:42.918 "hostsvcid": "60000", 00:28:42.918 "adrfam": "ipv4", 00:28:42.918 "trsvcid": "4420", 00:28:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.918 "multipath": "disable", 00:28:42.918 "method": "bdev_nvme_attach_controller", 00:28:42.918 "req_id": 1 00:28:42.918 } 00:28:42.918 Got JSON-RPC error response 00:28:42.918 response: 00:28:42.918 { 00:28:42.918 "code": -114, 00:28:42.918 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:42.918 } 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.918 request: 00:28:42.918 { 00:28:42.918 "name": "NVMe0", 00:28:42.918 "trtype": "tcp", 00:28:42.918 "traddr": "10.0.0.2", 00:28:42.918 "hostaddr": "10.0.0.2", 00:28:42.918 "hostsvcid": "60000", 00:28:42.918 "adrfam": "ipv4", 00:28:42.918 "trsvcid": "4420", 00:28:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.918 "multipath": "failover", 00:28:42.918 "method": "bdev_nvme_attach_controller", 00:28:42.918 "req_id": 1 00:28:42.918 } 00:28:42.918 Got JSON-RPC error response 00:28:42.918 response: 00:28:42.918 { 00:28:42.918 "code": -114, 00:28:42.918 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:42.918 } 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:42.918 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.179 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.179 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
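
The four NOT cases above all return JSON-RPC error -114: reusing the controller name NVMe0 is rejected when the retry changes the hostnqn, points at a different subsystem (cnode2), asks for multipath=disable, or asks for failover over the path that is already attached. What does succeed (@79/@87) is adding the subsystem's second listener as a genuinely new path. Reduced to the calls that work, mirroring the trace with rpc.py standing in for rpc_cmd:

  # first path; this is the attach that created bdev NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # second path to the same subsystem via listener 4421: accepted, no -114
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
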
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:43.179 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:43.180 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.180 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.180 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.180 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:43.180 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:44.563 0 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 554539 ']' 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 554539' 00:28:44.563 killing process with pid 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 554539 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.563 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:44.564 00:54:02 
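
Because bdevperf was started with -z it sits idle after its controllers are attached; the @95 step above is what actually fires the preconfigured workload (queue depth 128, 4 KiB writes, 1 second) by calling into the running process over its RPC socket:

  # start the configured job set in the already-running bdevperf (-z) instance
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
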
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:28:44.564 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:44.564 [2024-06-08 00:54:00.234246] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:44.564 [2024-06-08 00:54:00.234302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554539 ] 00:28:44.564 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.564 [2024-06-08 00:54:00.293808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.564 [2024-06-08 00:54:00.358351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.564 [2024-06-08 00:54:01.384047] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 210fe96a-a7b0-40c4-a065-2fd9e4032af2 already exists 00:28:44.564 [2024-06-08 00:54:01.384078] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:210fe96a-a7b0-40c4-a065-2fd9e4032af2 alias for bdev NVMe1n1 00:28:44.564 [2024-06-08 00:54:01.384088] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:44.564 Running I/O for 1 seconds... 
00:28:44.564 00:28:44.564 Latency(us) 00:28:44.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.564 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:44.564 NVMe0n1 : 1.00 19807.51 77.37 0.00 0.00 6442.80 4287.15 14964.05 00:28:44.564 =================================================================================================================== 00:28:44.564 Total : 19807.51 77.37 0.00 0.00 6442.80 4287.15 14964.05 00:28:44.564 Received shutdown signal, test time was about 1.000000 seconds 00:28:44.564 00:28:44.564 Latency(us) 00:28:44.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.564 =================================================================================================================== 00:28:44.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.564 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:44.564 rmmod nvme_tcp 00:28:44.564 rmmod nvme_fabrics 00:28:44.564 rmmod nvme_keyring 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 554308 ']' 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 554308 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 554308 ']' 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 554308 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:44.564 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 554308 00:28:44.824 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:44.824 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:44.824 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 554308' 00:28:44.824 killing process with pid 554308 00:28:44.824 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 554308 00:28:44.824 00:54:02 
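
The latency table embedded in the try.txt dump above is internally consistent: at the 4 KiB IO size, 19807.51 write IOPS works out to the 77.37 MiB/s shown, and queue depth 128 divided by the 6442.80 us average latency predicts roughly the same IOPS. Quick checks:

  python3 -c 'print(19807.51 * 4096 / 2**20)'   # -> 77.37... MiB/s, as reported
  python3 -c 'print(128 / 6442.80e-6)'          # -> ~19867 IOPS, close to 19807.51
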
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 554308 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.824 00:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.370 00:54:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:47.370 00:28:47.370 real 0m13.239s 00:28:47.370 user 0m15.720s 00:28:47.370 sys 0m6.016s 00:28:47.370 00:54:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:47.370 00:54:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.370 ************************************ 00:28:47.370 END TEST nvmf_multicontroller 00:28:47.370 ************************************ 00:28:47.370 00:54:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:47.370 00:54:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:47.370 00:54:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:47.370 00:54:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.370 ************************************ 00:28:47.370 START TEST nvmf_aer 00:28:47.370 ************************************ 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:47.370 * Looking for test storage... 
00:28:47.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.370 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- 
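
One difference from the previous test's prologue: aer.sh's common.sh pass generates a fresh host identity. nvme gen-hostnqn (from nvme-cli) prints a UUID-based NQN, and the NVME_HOSTID seen at common.sh@18 is evidently the UUID tail of that NQN; a plausible reconstruction of the two assignments:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> the bare UUID
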
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.371 00:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:53.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:28:53.960 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:53.960 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.960 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:53.961 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.961 
00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.961 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:54.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:28:54.222 00:28:54.222 --- 10.0.0.2 ping statistics --- 00:28:54.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.222 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:28:54.222 00:28:54.222 --- 10.0.0.1 ping statistics --- 00:28:54.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.222 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=559763 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 559763 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 559763 ']' 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:54.222 00:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:54.483 [2024-06-08 00:54:12.514824] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:54.483 [2024-06-08 00:54:12.514892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.484 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.484 [2024-06-08 00:54:12.585339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.484 [2024-06-08 00:54:12.660460] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.484 [2024-06-08 00:54:12.660497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
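The nvmf_tcp_init sequence traced above is the fixture every TCP test in this job reuses: one port of the dual-port E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the link before any NVMe/TCP traffic flows. Condensed into a standalone sketch; the interface names and 10.0.0.0/24 addressing are specific to this run, so treat it as illustrative rather than the harness itself:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP through the host firewall
  ping -c 1 10.0.0.2                                    # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator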
00:28:54.484 [2024-06-08 00:54:12.660505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.484 [2024-06-08 00:54:12.660512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.484 [2024-06-08 00:54:12.660517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.484 [2024-06-08 00:54:12.660664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.484 [2024-06-08 00:54:12.660780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.484 [2024-06-08 00:54:12.660937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.484 [2024-06-08 00:54:12.660938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.054 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:55.054 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:28:55.054 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:55.054 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:55.054 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 [2024-06-08 00:54:13.348038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 Malloc0 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 [2024-06-08 00:54:13.407376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.314 [ 00:28:55.314 { 00:28:55.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:55.314 "subtype": "Discovery", 00:28:55.314 "listen_addresses": [], 00:28:55.314 "allow_any_host": true, 00:28:55.314 "hosts": [] 00:28:55.314 }, 00:28:55.314 { 00:28:55.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.314 "subtype": "NVMe", 00:28:55.314 "listen_addresses": [ 00:28:55.314 { 00:28:55.314 "trtype": "TCP", 00:28:55.314 "adrfam": "IPv4", 00:28:55.314 "traddr": "10.0.0.2", 00:28:55.314 "trsvcid": "4420" 00:28:55.314 } 00:28:55.314 ], 00:28:55.314 "allow_any_host": true, 00:28:55.314 "hosts": [], 00:28:55.314 "serial_number": "SPDK00000000000001", 00:28:55.314 "model_number": "SPDK bdev Controller", 00:28:55.314 "max_namespaces": 2, 00:28:55.314 "min_cntlid": 1, 00:28:55.314 "max_cntlid": 65519, 00:28:55.314 "namespaces": [ 00:28:55.314 { 00:28:55.314 "nsid": 1, 00:28:55.314 "bdev_name": "Malloc0", 00:28:55.314 "name": "Malloc0", 00:28:55.314 "nguid": "18A84EB89D3B4BA1BA342DF1B0C6BB0C", 00:28:55.314 "uuid": "18a84eb8-9d3b-4ba1-ba34-2df1b0c6bb0c" 00:28:55.314 } 00:28:55.314 ] 00:28:55.314 } 00:28:55.314 ] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=559818 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:28:55.314 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:28:55.314 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 Malloc1 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 [ 00:28:55.574 { 00:28:55.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:55.574 "subtype": "Discovery", 00:28:55.574 "listen_addresses": [], 00:28:55.574 "allow_any_host": true, 00:28:55.574 "hosts": [] 00:28:55.574 }, 00:28:55.574 { 00:28:55.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.574 "subtype": "NVMe", 00:28:55.574 "listen_addresses": [ 00:28:55.574 { 00:28:55.574 "trtype": "TCP", 00:28:55.574 "adrfam": "IPv4", 00:28:55.574 "traddr": "10.0.0.2", 00:28:55.574 "trsvcid": "4420" 00:28:55.574 } 00:28:55.574 ], 00:28:55.574 "allow_any_host": true, 00:28:55.574 "hosts": [], 00:28:55.574 "serial_number": "SPDK00000000000001", 00:28:55.574 "model_number": "SPDK bdev Controller", 00:28:55.574 "max_namespaces": 2, 00:28:55.574 "min_cntlid": 1, 00:28:55.574 "max_cntlid": 65519, 00:28:55.574 "namespaces": [ 00:28:55.574 { 00:28:55.574 "nsid": 1, 00:28:55.574 "bdev_name": "Malloc0", 00:28:55.574 "name": "Malloc0", 00:28:55.574 "nguid": "18A84EB89D3B4BA1BA342DF1B0C6BB0C", 00:28:55.574 "uuid": "18a84eb8-9d3b-4ba1-ba34-2df1b0c6bb0c" 00:28:55.574 }, 00:28:55.574 { 00:28:55.574 "nsid": 2, 00:28:55.574 "bdev_name": "Malloc1", 00:28:55.574 Asynchronous Event Request test 00:28:55.574 Attaching to 10.0.0.2 00:28:55.574 Attached to 10.0.0.2 00:28:55.574 Registering asynchronous event callbacks... 00:28:55.574 Starting namespace attribute notice tests for all controllers... 00:28:55.574 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:55.574 aer_cb - Changed Namespace 00:28:55.574 Cleaning up... 
00:28:55.574 "name": "Malloc1", 00:28:55.574 "nguid": "F793762276844738B6192CF2A61B3B8D", 00:28:55.574 "uuid": "f7937622-7684-4738-b619-2cf2a61b3b8d" 00:28:55.574 } 00:28:55.574 ] 00:28:55.574 } 00:28:55.574 ] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 559818 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:55.574 rmmod nvme_tcp 00:28:55.574 rmmod nvme_fabrics 00:28:55.574 rmmod nvme_keyring 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 559763 ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 559763 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 559763 ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 559763 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:55.574 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 559763 00:28:55.834 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:55.834 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:55.834 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 559763' 00:28:55.834 killing process with pid 559763 00:28:55.834 00:54:13 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@968 -- # kill 559763 00:28:55.834 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 559763 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.834 00:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.380 00:54:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.380 00:28:58.380 real 0m10.923s 00:28:58.380 user 0m7.555s 00:28:58.380 sys 0m5.722s 00:28:58.380 00:54:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:58.380 00:54:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:58.380 ************************************ 00:28:58.380 END TEST nvmf_aer 00:28:58.380 ************************************ 00:28:58.381 00:54:16 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:58.381 00:54:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:58.381 00:54:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:58.381 00:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.381 ************************************ 00:28:58.381 START TEST nvmf_async_init 00:28:58.381 ************************************ 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:58.381 * Looking for test storage... 
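The nvmf_aer test that just finished is the Namespace Attribute Changed scenario: a subsystem capped at two namespaces (-m 2) starts with only Malloc0 attached, the aer tool connects and arms its async-event callback, and hot-adding Malloc1 as NSID 2 is what produces the "aer_cb - Changed Namespace" line interleaved with the JSON above. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py; assuming that equivalence, the scenario reduces to roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the aer tool registers for async events, then touches a file once it is armed
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # adding a second namespace fires the Namespace Attribute Changed AER
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2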
00:28:58.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=00237aed3d764b0fa08ec6c60b0ba6a0 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.381 00:54:16 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.381 00:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.977 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:04.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:04.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:04.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
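The device-discovery loop being traced here resolves each supported PCI function to its kernel interface purely through sysfs: /sys/bus/pci/devices/<addr>/net/ holds one directory per netdev bound to that function, and the harness keeps the basename of each entry whose link is up. Stripped of the harness bookkeeping (pci_devs here would hold 0000:4b:00.0 and 0000:4b:00.1, per the "Found ..." lines), one pass is approximately:

  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev sysfs dirs for this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                   # accumulated across all functions
  done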
00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:04.978 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.978 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:04.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:29:04.978 00:29:04.978 --- 10.0.0.2 ping statistics --- 00:29:04.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.978 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:29:04.978 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:29:04.978 00:29:04.978 --- 10.0.0.1 ping statistics --- 00:29:04.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.978 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=564106 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 564106 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 564106 ']' 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:04.979 00:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:05.240 [2024-06-08 00:54:23.272621] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
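With the namespace fixture rebuilt for nvmf_async_init, nvmfappstart launches the target inside that namespace, so the listener it later opens on 10.0.0.2:4420 sits behind cvl_0_0 while RPCs still reach it over the filesystem-backed UNIX socket, which is visible across network namespaces. Per the trace (relative path substituted for this workspace's absolute one; -m 0x1 pins a single reactor core for this test, where nvmf_aer above used -m 0xF for four):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # the harness then blocks in waitforlisten until /var/tmp/spdk.sock accepts RPCs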
00:29:05.240 [2024-06-08 00:54:23.272683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.240 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.240 [2024-06-08 00:54:23.342170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.240 [2024-06-08 00:54:23.415607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.240 [2024-06-08 00:54:23.415643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.240 [2024-06-08 00:54:23.415650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.240 [2024-06-08 00:54:23.415657] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.240 [2024-06-08 00:54:23.415663] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.240 [2024-06-08 00:54:23.415681] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:05.810 [2024-06-08 00:54:24.070393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:05.810 null0 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.810 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 00237aed3d764b0fa08ec6c60b0ba6a0 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.072 [2024-06-08 00:54:24.126647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.072 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.334 nvme0n1 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.334 [ 00:29:06.334 { 00:29:06.334 "name": "nvme0n1", 00:29:06.334 "aliases": [ 00:29:06.334 "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0" 00:29:06.334 ], 00:29:06.334 "product_name": "NVMe disk", 00:29:06.334 "block_size": 512, 00:29:06.334 "num_blocks": 2097152, 00:29:06.334 "uuid": "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0", 00:29:06.334 "assigned_rate_limits": { 00:29:06.334 "rw_ios_per_sec": 0, 00:29:06.334 "rw_mbytes_per_sec": 0, 00:29:06.334 "r_mbytes_per_sec": 0, 00:29:06.334 "w_mbytes_per_sec": 0 00:29:06.334 }, 00:29:06.334 "claimed": false, 00:29:06.334 "zoned": false, 00:29:06.334 "supported_io_types": { 00:29:06.334 "read": true, 00:29:06.334 "write": true, 00:29:06.334 "unmap": false, 00:29:06.334 "write_zeroes": true, 00:29:06.334 "flush": true, 00:29:06.334 "reset": true, 00:29:06.334 "compare": true, 00:29:06.334 "compare_and_write": true, 00:29:06.334 "abort": true, 00:29:06.334 "nvme_admin": true, 00:29:06.334 "nvme_io": true 00:29:06.334 }, 00:29:06.334 "memory_domains": [ 00:29:06.334 { 00:29:06.334 "dma_device_id": "system", 00:29:06.334 "dma_device_type": 1 00:29:06.334 } 00:29:06.334 ], 00:29:06.334 "driver_specific": { 00:29:06.334 "nvme": [ 00:29:06.334 { 00:29:06.334 "trid": { 00:29:06.334 "trtype": "TCP", 00:29:06.334 "adrfam": "IPv4", 00:29:06.334 "traddr": "10.0.0.2", 00:29:06.334 "trsvcid": "4420", 00:29:06.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:06.334 }, 00:29:06.334 "ctrlr_data": { 00:29:06.334 "cntlid": 1, 00:29:06.334 "vendor_id": "0x8086", 00:29:06.334 "model_number": "SPDK bdev Controller", 00:29:06.334 "serial_number": "00000000000000000000", 00:29:06.334 "firmware_revision": 
"24.09", 00:29:06.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.334 "oacs": { 00:29:06.334 "security": 0, 00:29:06.334 "format": 0, 00:29:06.334 "firmware": 0, 00:29:06.334 "ns_manage": 0 00:29:06.334 }, 00:29:06.334 "multi_ctrlr": true, 00:29:06.334 "ana_reporting": false 00:29:06.334 }, 00:29:06.334 "vs": { 00:29:06.334 "nvme_version": "1.3" 00:29:06.334 }, 00:29:06.334 "ns_data": { 00:29:06.334 "id": 1, 00:29:06.334 "can_share": true 00:29:06.334 } 00:29:06.334 } 00:29:06.334 ], 00:29:06.334 "mp_policy": "active_passive" 00:29:06.334 } 00:29:06.334 } 00:29:06.334 ] 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.334 [2024-06-08 00:54:24.391184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:06.334 [2024-06-08 00:54:24.391244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103c210 (9): Bad file descriptor 00:29:06.334 [2024-06-08 00:54:24.525497] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.334 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.334 [ 00:29:06.334 { 00:29:06.334 "name": "nvme0n1", 00:29:06.334 "aliases": [ 00:29:06.334 "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0" 00:29:06.334 ], 00:29:06.334 "product_name": "NVMe disk", 00:29:06.334 "block_size": 512, 00:29:06.334 "num_blocks": 2097152, 00:29:06.334 "uuid": "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0", 00:29:06.334 "assigned_rate_limits": { 00:29:06.334 "rw_ios_per_sec": 0, 00:29:06.334 "rw_mbytes_per_sec": 0, 00:29:06.334 "r_mbytes_per_sec": 0, 00:29:06.334 "w_mbytes_per_sec": 0 00:29:06.334 }, 00:29:06.334 "claimed": false, 00:29:06.334 "zoned": false, 00:29:06.334 "supported_io_types": { 00:29:06.334 "read": true, 00:29:06.334 "write": true, 00:29:06.334 "unmap": false, 00:29:06.334 "write_zeroes": true, 00:29:06.334 "flush": true, 00:29:06.334 "reset": true, 00:29:06.334 "compare": true, 00:29:06.334 "compare_and_write": true, 00:29:06.334 "abort": true, 00:29:06.334 "nvme_admin": true, 00:29:06.334 "nvme_io": true 00:29:06.334 }, 00:29:06.334 "memory_domains": [ 00:29:06.334 { 00:29:06.334 "dma_device_id": "system", 00:29:06.334 "dma_device_type": 1 00:29:06.334 } 00:29:06.334 ], 00:29:06.334 "driver_specific": { 00:29:06.334 "nvme": [ 00:29:06.334 { 00:29:06.334 "trid": { 00:29:06.334 "trtype": "TCP", 00:29:06.334 "adrfam": "IPv4", 00:29:06.334 "traddr": "10.0.0.2", 00:29:06.334 "trsvcid": "4420", 00:29:06.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:06.334 }, 00:29:06.334 "ctrlr_data": { 00:29:06.334 "cntlid": 2, 00:29:06.334 "vendor_id": "0x8086", 00:29:06.334 "model_number": "SPDK bdev Controller", 00:29:06.334 "serial_number": "00000000000000000000", 00:29:06.334 "firmware_revision": "24.09", 00:29:06.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.334 
"oacs": { 00:29:06.334 "security": 0, 00:29:06.334 "format": 0, 00:29:06.334 "firmware": 0, 00:29:06.334 "ns_manage": 0 00:29:06.334 }, 00:29:06.334 "multi_ctrlr": true, 00:29:06.335 "ana_reporting": false 00:29:06.335 }, 00:29:06.335 "vs": { 00:29:06.335 "nvme_version": "1.3" 00:29:06.335 }, 00:29:06.335 "ns_data": { 00:29:06.335 "id": 1, 00:29:06.335 "can_share": true 00:29:06.335 } 00:29:06.335 } 00:29:06.335 ], 00:29:06.335 "mp_policy": "active_passive" 00:29:06.335 } 00:29:06.335 } 00:29:06.335 ] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.i9zgGp5TmT 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.i9zgGp5TmT 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.335 [2024-06-08 00:54:24.595821] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:06.335 [2024-06-08 00:54:24.595941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i9zgGp5TmT 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.335 [2024-06-08 00:54:24.607844] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i9zgGp5TmT 00:29:06.335 00:54:24 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.335 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.596 [2024-06-08 00:54:24.619882] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:06.596 [2024-06-08 00:54:24.619920] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:06.596 nvme0n1 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:06.596 [ 00:29:06.596 { 00:29:06.596 "name": "nvme0n1", 00:29:06.596 "aliases": [ 00:29:06.596 "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0" 00:29:06.596 ], 00:29:06.596 "product_name": "NVMe disk", 00:29:06.596 "block_size": 512, 00:29:06.596 "num_blocks": 2097152, 00:29:06.596 "uuid": "00237aed-3d76-4b0f-a08e-c6c60b0ba6a0", 00:29:06.596 "assigned_rate_limits": { 00:29:06.596 "rw_ios_per_sec": 0, 00:29:06.596 "rw_mbytes_per_sec": 0, 00:29:06.596 "r_mbytes_per_sec": 0, 00:29:06.596 "w_mbytes_per_sec": 0 00:29:06.596 }, 00:29:06.596 "claimed": false, 00:29:06.596 "zoned": false, 00:29:06.596 "supported_io_types": { 00:29:06.596 "read": true, 00:29:06.596 "write": true, 00:29:06.596 "unmap": false, 00:29:06.596 "write_zeroes": true, 00:29:06.596 "flush": true, 00:29:06.596 "reset": true, 00:29:06.596 "compare": true, 00:29:06.596 "compare_and_write": true, 00:29:06.596 "abort": true, 00:29:06.596 "nvme_admin": true, 00:29:06.596 "nvme_io": true 00:29:06.596 }, 00:29:06.596 "memory_domains": [ 00:29:06.596 { 00:29:06.596 "dma_device_id": "system", 00:29:06.596 "dma_device_type": 1 00:29:06.596 } 00:29:06.596 ], 00:29:06.596 "driver_specific": { 00:29:06.596 "nvme": [ 00:29:06.596 { 00:29:06.596 "trid": { 00:29:06.596 "trtype": "TCP", 00:29:06.596 "adrfam": "IPv4", 00:29:06.596 "traddr": "10.0.0.2", 00:29:06.596 "trsvcid": "4421", 00:29:06.596 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:06.596 }, 00:29:06.596 "ctrlr_data": { 00:29:06.596 "cntlid": 3, 00:29:06.596 "vendor_id": "0x8086", 00:29:06.596 "model_number": "SPDK bdev Controller", 00:29:06.596 "serial_number": "00000000000000000000", 00:29:06.596 "firmware_revision": "24.09", 00:29:06.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.596 "oacs": { 00:29:06.596 "security": 0, 00:29:06.596 "format": 0, 00:29:06.596 "firmware": 0, 00:29:06.596 "ns_manage": 0 00:29:06.596 }, 00:29:06.596 "multi_ctrlr": true, 00:29:06.596 "ana_reporting": false 00:29:06.596 }, 00:29:06.596 "vs": { 00:29:06.596 "nvme_version": "1.3" 00:29:06.596 }, 00:29:06.596 "ns_data": { 00:29:06.596 "id": 1, 00:29:06.596 "can_share": true 00:29:06.596 } 00:29:06.596 } 00:29:06.596 ], 00:29:06.596 "mp_policy": "active_passive" 00:29:06.596 } 00:29:06.596 } 00:29:06.596 ] 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.i9zgGp5TmT 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:06.596 00:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:06.597 rmmod nvme_tcp 00:29:06.597 rmmod nvme_fabrics 00:29:06.597 rmmod nvme_keyring 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 564106 ']' 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 564106 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 564106 ']' 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 564106 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 564106 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 564106' 00:29:06.597 killing process with pid 564106 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 564106 00:29:06.597 [2024-06-08 00:54:24.868188] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:06.597 [2024-06-08 00:54:24.868215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:06.597 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 564106 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.858 00:54:24 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.858 00:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.844 00:54:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.844 00:29:08.844 real 0m10.884s 00:29:08.844 user 0m3.894s 00:29:08.844 sys 0m5.433s 00:29:08.844 00:54:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:08.844 00:54:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 ************************************ 00:29:08.844 END TEST nvmf_async_init 00:29:08.844 ************************************ 00:29:08.844 00:54:27 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:08.844 00:54:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:08.844 00:54:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:08.844 00:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.105 ************************************ 00:29:09.105 START TEST dma 00:29:09.105 ************************************ 00:29:09.105 00:54:27 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:09.105 * Looking for test storage... 00:29:09.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.105 00:54:27 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.105 00:54:27 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.105 00:54:27 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.105 00:54:27 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.105 00:54:27 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.105 00:54:27 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.105 00:54:27 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.105 00:54:27 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:09.105 00:54:27 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.105 00:54:27 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.105 00:54:27 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:09.105 00:54:27 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:09.105 00:29:09.105 real 0m0.130s 00:29:09.105 user 0m0.062s 00:29:09.105 sys 0m0.077s 00:29:09.105 
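Note on the dma result above: this test is effectively a no-op for a TCP job. The trace at host/dma.sh@12-13 shows it comparing the transport against rdma and exiting 0 on a mismatch, which is why the timing summary reports only ~0.1s of wall time. A minimal sketch of that guard, reconstructed from the xtrace -- the variable name is an assumption, since only the expanded comparison '[ tcp != rdma ]' appears in the trace:

    # Sketch of the guard traced at host/dma.sh@12-13: the DMA path is
    # exercised only over RDMA, so a --transport=tcp run exits successfully
    # before starting the target or issuing any I/O.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
            exit 0
    fi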
00:54:27 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.105 00:54:27 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:09.105 ************************************ 00:29:09.105 END TEST dma 00:29:09.105 ************************************ 00:29:09.105 00:54:27 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:09.105 00:54:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:09.105 00:54:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:09.105 00:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.105 ************************************ 00:29:09.105 START TEST nvmf_identify 00:29:09.105 ************************************ 00:29:09.106 00:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:09.367 * Looking for test storage... 00:29:09.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.367 00:54:27 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:09.367 00:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.957 00:54:34 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:15.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:15.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.957 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:15.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:15.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.958 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:16.219 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:29:16.219 00:29:16.219 --- 10.0.0.2 ping statistics --- 00:29:16.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.219 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:29:16.219 00:29:16.219 --- 10.0.0.1 ping statistics --- 00:29:16.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.219 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=568509 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 568509 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 568509 ']' 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:16.219 00:54:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:16.480 [2024-06-08 00:54:34.525071] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:29:16.480 [2024-06-08 00:54:34.525138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.480 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.480 [2024-06-08 00:54:34.595919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.480 [2024-06-08 00:54:34.672806] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.480 [2024-06-08 00:54:34.672844] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.480 [2024-06-08 00:54:34.672855] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.480 [2024-06-08 00:54:34.672861] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.480 [2024-06-08 00:54:34.672867] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.480 [2024-06-08 00:54:34.673010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.480 [2024-06-08 00:54:34.673138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.480 [2024-06-08 00:54:34.673294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.480 [2024-06-08 00:54:34.673296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.078 [2024-06-08 00:54:35.301836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.078 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 Malloc0 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 [2024-06-08 00:54:35.398729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.341 [ 00:29:17.341 { 00:29:17.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:17.341 "subtype": "Discovery", 00:29:17.341 "listen_addresses": [ 00:29:17.341 { 00:29:17.341 "trtype": "TCP", 00:29:17.341 "adrfam": "IPv4", 00:29:17.341 "traddr": "10.0.0.2", 00:29:17.341 "trsvcid": "4420" 00:29:17.341 } 00:29:17.341 ], 00:29:17.341 "allow_any_host": true, 00:29:17.341 "hosts": [] 00:29:17.341 }, 00:29:17.341 { 00:29:17.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.341 "subtype": "NVMe", 00:29:17.341 "listen_addresses": [ 00:29:17.341 { 00:29:17.341 "trtype": "TCP", 00:29:17.341 "adrfam": "IPv4", 00:29:17.341 "traddr": "10.0.0.2", 00:29:17.341 "trsvcid": "4420" 00:29:17.341 } 00:29:17.341 ], 00:29:17.341 "allow_any_host": true, 00:29:17.341 "hosts": [], 00:29:17.341 "serial_number": "SPDK00000000000001", 00:29:17.341 "model_number": "SPDK bdev Controller", 00:29:17.341 "max_namespaces": 32, 00:29:17.341 "min_cntlid": 1, 00:29:17.341 "max_cntlid": 65519, 00:29:17.341 "namespaces": [ 00:29:17.341 { 00:29:17.341 "nsid": 1, 00:29:17.341 "bdev_name": "Malloc0", 00:29:17.341 "name": "Malloc0", 00:29:17.341 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:17.341 "eui64": "ABCDEF0123456789", 00:29:17.341 "uuid": "1d9fb220-90d6-49a5-b4ba-6747dbed07a3" 00:29:17.341 } 00:29:17.341 ] 00:29:17.341 } 00:29:17.341 ] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.341 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:17.341 [2024-06-08 00:54:35.459369] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:29:17.341 [2024-06-08 00:54:35.459436] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568713 ] 00:29:17.341 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.341 [2024-06-08 00:54:35.493064] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:17.341 [2024-06-08 00:54:35.493104] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:17.341 [2024-06-08 00:54:35.493110] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:17.341 [2024-06-08 00:54:35.493121] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:17.341 [2024-06-08 00:54:35.493129] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:17.341 [2024-06-08 00:54:35.496443] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:17.341 [2024-06-08 00:54:35.496471] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa93ec0 0 00:29:17.341 [2024-06-08 00:54:35.504408] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:17.341 [2024-06-08 00:54:35.504423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:17.341 [2024-06-08 00:54:35.504429] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:17.341 [2024-06-08 00:54:35.504432] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:17.341 [2024-06-08 00:54:35.504466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.504472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.504477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.341 [2024-06-08 00:54:35.504490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:17.341 [2024-06-08 00:54:35.504506] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.341 [2024-06-08 00:54:35.512414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.341 [2024-06-08 00:54:35.512424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.341 [2024-06-08 00:54:35.512427] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.341 [2024-06-08 00:54:35.512441] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:17.341 [2024-06-08 00:54:35.512448] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:17.341 [2024-06-08 00:54:35.512453] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:17.341 [2024-06-08 00:54:35.512467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.341 [2024-06-08 00:54:35.512482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.341 [2024-06-08 00:54:35.512495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.341 [2024-06-08 00:54:35.512734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.341 [2024-06-08 00:54:35.512740] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.341 [2024-06-08 00:54:35.512744] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.341 [2024-06-08 00:54:35.512753] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:17.341 [2024-06-08 00:54:35.512760] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:17.341 [2024-06-08 00:54:35.512766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.341 [2024-06-08 00:54:35.512774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.341 [2024-06-08 00:54:35.512781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.341 [2024-06-08 00:54:35.512791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.341 [2024-06-08 00:54:35.512983] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.512990] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.512993] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.512997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.513002] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:17.342 [2024-06-08 00:54:35.513010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513020] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513023] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.513030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.342 [2024-06-08 00:54:35.513040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.513254] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.513261] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.513264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.513273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513282] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513291] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.513298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.342 [2024-06-08 00:54:35.513308] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.513536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.513543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.513547] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513551] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.513555] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:17.342 [2024-06-08 00:54:35.513560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513672] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:17.342 [2024-06-08 00:54:35.513677] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513688] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.513698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.342 [2024-06-08 00:54:35.513708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.513929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.513936] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.513939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 
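For reference, the target-side state that the spdk_nvme_identify run traced here is interrogating was built earlier in this test (host/identify.sh@24 through @35 in the trace) via rpc_cmd, SPDK's shell wrapper around scripts/rpc.py. Consolidated into a standalone sketch runnable against the same nvmf_tgt -- every method name and flag is taken verbatim from the trace; only the $rpc shorthand is added here:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport and a 64 MB / 512 B-block malloc bdev, flags as traced
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # subsystem, namespace, then data and discovery listeners on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
            --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420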
[2024-06-08 00:54:35.513943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.513947] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:17.342 [2024-06-08 00:54:35.513956] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513960] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.513963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.513970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.342 [2024-06-08 00:54:35.513979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.514211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.514217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.514220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.514228] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:17.342 [2024-06-08 00:54:35.514233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:17.342 [2024-06-08 00:54:35.514243] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:17.342 [2024-06-08 00:54:35.514251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:17.342 [2024-06-08 00:54:35.514260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.514270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.342 [2024-06-08 00:54:35.514280] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.514504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.342 [2024-06-08 00:54:35.514510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.342 [2024-06-08 00:54:35.514514] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514518] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa93ec0): datao=0, datal=4096, cccid=0 00:29:17.342 [2024-06-08 00:54:35.514522] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb16df0) on tqpair(0xa93ec0): expected_datao=0, payload_size=4096 00:29:17.342 [2024-06-08 00:54:35.514527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 
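Decoded, the *DEBUG*/*NOTICE* records above are the standard NVMe-oF controller bring-up that spdk_nvme_identify performs against the discovery subsystem: FABRIC CONNECT on the admin queue, FABRIC PROPERTY GETs of VS and CAP, a check that CC.EN and CSTS.RDY are both 0, a FABRIC PROPERTY SET writing CC.EN=1, a wait for CSTS.RDY=1, and finally IDENTIFY with cdw10:00000001 (CNS 01h, Identify Controller), answered by the 4096-byte C2H data PDU. The client side of this sequence can be replayed with the exact invocation from host/identify.sh@39 above:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
            -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
            -L all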
[2024-06-08 00:54:35.514570] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514576] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514770] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.514776] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.514779] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.514791] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:17.342 [2024-06-08 00:54:35.514795] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:17.342 [2024-06-08 00:54:35.514800] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:17.342 [2024-06-08 00:54:35.514804] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:17.342 [2024-06-08 00:54:35.514809] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:17.342 [2024-06-08 00:54:35.514813] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:17.342 [2024-06-08 00:54:35.514824] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:17.342 [2024-06-08 00:54:35.514833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514837] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.514840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.514847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:17.342 [2024-06-08 00:54:35.514858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.342 [2024-06-08 00:54:35.515061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.342 [2024-06-08 00:54:35.515067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.342 [2024-06-08 00:54:35.515073] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515077] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb16df0) on tqpair=0xa93ec0 00:29:17.342 [2024-06-08 00:54:35.515087] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515091] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515095] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.515101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.342 [2024-06-08 00:54:35.515107] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.515120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.342 [2024-06-08 00:54:35.515126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa93ec0) 00:29:17.342 [2024-06-08 00:54:35.515139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.342 [2024-06-08 00:54:35.515145] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.342 [2024-06-08 00:54:35.515148] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515152] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.515157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.343 [2024-06-08 00:54:35.515162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:17.343 [2024-06-08 00:54:35.515170] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:17.343 [2024-06-08 00:54:35.515176] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.515186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.343 [2024-06-08 00:54:35.515197] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16df0, cid 0, qid 0 00:29:17.343 [2024-06-08 00:54:35.515203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb16f50, cid 1, qid 0 00:29:17.343 [2024-06-08 00:54:35.515207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb170b0, cid 2, qid 0 00:29:17.343 [2024-06-08 00:54:35.515212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.343 [2024-06-08 00:54:35.515216] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17370, cid 4, qid 0 00:29:17.343 [2024-06-08 00:54:35.515467] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.343 [2024-06-08 00:54:35.515474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.343 [2024-06-08 00:54:35.515478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515481] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17370) on tqpair=0xa93ec0 00:29:17.343 [2024-06-08 00:54:35.515486] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:17.343 [2024-06-08 00:54:35.515495] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:17.343 [2024-06-08 00:54:35.515505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515509] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.515516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.343 [2024-06-08 00:54:35.515525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17370, cid 4, qid 0 00:29:17.343 [2024-06-08 00:54:35.515762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.343 [2024-06-08 00:54:35.515769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.343 [2024-06-08 00:54:35.515772] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515776] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa93ec0): datao=0, datal=4096, cccid=4 00:29:17.343 [2024-06-08 00:54:35.515781] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb17370) on tqpair(0xa93ec0): expected_datao=0, payload_size=4096 00:29:17.343 [2024-06-08 00:54:35.515785] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515878] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.515881] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.343 [2024-06-08 00:54:35.559419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.343 [2024-06-08 00:54:35.559423] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17370) on tqpair=0xa93ec0 00:29:17.343 [2024-06-08 00:54:35.559439] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:17.343 [2024-06-08 00:54:35.559464] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.559476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.343 [2024-06-08 00:54:35.559483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.559496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.343 [2024-06-08 00:54:35.559510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xb17370, cid 4, qid 0 00:29:17.343 [2024-06-08 00:54:35.559515] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb174d0, cid 5, qid 0 00:29:17.343 [2024-06-08 00:54:35.559741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.343 [2024-06-08 00:54:35.559749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.343 [2024-06-08 00:54:35.559753] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559756] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa93ec0): datao=0, datal=1024, cccid=4 00:29:17.343 [2024-06-08 00:54:35.559761] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb17370) on tqpair(0xa93ec0): expected_datao=0, payload_size=1024 00:29:17.343 [2024-06-08 00:54:35.559765] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559772] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559775] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.343 [2024-06-08 00:54:35.559792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.343 [2024-06-08 00:54:35.559795] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.559799] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb174d0) on tqpair=0xa93ec0 00:29:17.343 [2024-06-08 00:54:35.601626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.343 [2024-06-08 00:54:35.601637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.343 [2024-06-08 00:54:35.601640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.601644] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17370) on tqpair=0xa93ec0 00:29:17.343 [2024-06-08 00:54:35.601659] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.601664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa93ec0) 00:29:17.343 [2024-06-08 00:54:35.601671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.343 [2024-06-08 00:54:35.601686] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17370, cid 4, qid 0 00:29:17.343 [2024-06-08 00:54:35.601943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.343 [2024-06-08 00:54:35.601950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.343 [2024-06-08 00:54:35.601953] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.601957] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa93ec0): datao=0, datal=3072, cccid=4 00:29:17.343 [2024-06-08 00:54:35.601961] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb17370) on tqpair(0xa93ec0): expected_datao=0, payload_size=3072 00:29:17.343 [2024-06-08 00:54:35.601965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.601972] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.343 [2024-06-08 00:54:35.601976] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.343 [2024-06-08 00:54:35.602124] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.343 [2024-06-08 00:54:35.602127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17370) on tqpair=0xa93ec0
00:29:17.343 [2024-06-08 00:54:35.602139] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa93ec0)
00:29:17.343 [2024-06-08 00:54:35.602149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.343 [2024-06-08 00:54:35.602162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17370, cid 4, qid 0
00:29:17.343 [2024-06-08 00:54:35.602430] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:17.343 [2024-06-08 00:54:35.602437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:17.343 [2024-06-08 00:54:35.602440] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602444] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa93ec0): datao=0, datal=8, cccid=4
00:29:17.343 [2024-06-08 00:54:35.602448] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb17370) on tqpair(0xa93ec0): expected_datao=0, payload_size=8
00:29:17.343 [2024-06-08 00:54:35.602452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602459] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:17.343 [2024-06-08 00:54:35.602462] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:17.609 [2024-06-08 00:54:35.647411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.609 [2024-06-08 00:54:35.647424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.609 [2024-06-08 00:54:35.647428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.609 [2024-06-08 00:54:35.647432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17370) on tqpair=0xa93ec0
00:29:17.609 =====================================================
00:29:17.609 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:29:17.609 =====================================================
00:29:17.609 Controller Capabilities/Features
00:29:17.609 ================================
00:29:17.609 Vendor ID: 0000
00:29:17.609 Subsystem Vendor ID: 0000
00:29:17.609 Serial Number: ....................
00:29:17.609 Model Number: ........................................
00:29:17.609 Firmware Version: 24.09
00:29:17.609 Recommended Arb Burst: 0
00:29:17.609 IEEE OUI Identifier: 00 00 00
00:29:17.609 Multi-path I/O
00:29:17.609 May have multiple subsystem ports: No
00:29:17.609 May have multiple controllers: No
00:29:17.609 Associated with SR-IOV VF: No
00:29:17.609 Max Data Transfer Size: 131072
00:29:17.609 Max Number of Namespaces: 0
00:29:17.609 Max Number of I/O Queues: 1024
00:29:17.609 NVMe Specification Version (VS): 1.3
00:29:17.609 NVMe Specification Version (Identify): 1.3
00:29:17.609 Maximum Queue Entries: 128
00:29:17.609 Contiguous Queues Required: Yes
00:29:17.609 Arbitration Mechanisms Supported
00:29:17.609 Weighted Round Robin: Not Supported
00:29:17.609 Vendor Specific: Not Supported
00:29:17.609 Reset Timeout: 15000 ms
00:29:17.609 Doorbell Stride: 4 bytes
00:29:17.609 NVM Subsystem Reset: Not Supported
00:29:17.609 Command Sets Supported
00:29:17.609 NVM Command Set: Supported
00:29:17.609 Boot Partition: Not Supported
00:29:17.609 Memory Page Size Minimum: 4096 bytes
00:29:17.609 Memory Page Size Maximum: 4096 bytes
00:29:17.609 Persistent Memory Region: Not Supported
00:29:17.609 Optional Asynchronous Events Supported
00:29:17.609 Namespace Attribute Notices: Not Supported
00:29:17.609 Firmware Activation Notices: Not Supported
00:29:17.609 ANA Change Notices: Not Supported
00:29:17.609 PLE Aggregate Log Change Notices: Not Supported
00:29:17.609 LBA Status Info Alert Notices: Not Supported
00:29:17.609 EGE Aggregate Log Change Notices: Not Supported
00:29:17.609 Normal NVM Subsystem Shutdown event: Not Supported
00:29:17.609 Zone Descriptor Change Notices: Not Supported
00:29:17.609 Discovery Log Change Notices: Supported
00:29:17.609 Controller Attributes
00:29:17.609 128-bit Host Identifier: Not Supported
00:29:17.609 Non-Operational Permissive Mode: Not Supported
00:29:17.609 NVM Sets: Not Supported
00:29:17.609 Read Recovery Levels: Not Supported
00:29:17.609 Endurance Groups: Not Supported
00:29:17.609 Predictable Latency Mode: Not Supported
00:29:17.609 Traffic Based Keep ALive: Not Supported
00:29:17.609 Namespace Granularity: Not Supported
00:29:17.609 SQ Associations: Not Supported
00:29:17.609 UUID List: Not Supported
00:29:17.609 Multi-Domain Subsystem: Not Supported
00:29:17.609 Fixed Capacity Management: Not Supported
00:29:17.609 Variable Capacity Management: Not Supported
00:29:17.609 Delete Endurance Group: Not Supported
00:29:17.609 Delete NVM Set: Not Supported
00:29:17.609 Extended LBA Formats Supported: Not Supported
00:29:17.609 Flexible Data Placement Supported: Not Supported
00:29:17.609
00:29:17.609 Controller Memory Buffer Support
00:29:17.609 ================================
00:29:17.609 Supported: No
00:29:17.609
00:29:17.609 Persistent Memory Region Support
00:29:17.609 ================================
00:29:17.609 Supported: No
00:29:17.609
00:29:17.609 Admin Command Set Attributes
00:29:17.609 ============================
00:29:17.609 Security Send/Receive: Not Supported
00:29:17.609 Format NVM: Not Supported
00:29:17.609 Firmware Activate/Download: Not Supported
00:29:17.609 Namespace Management: Not Supported
00:29:17.609 Device Self-Test: Not Supported
00:29:17.609 Directives: Not Supported
00:29:17.609 NVMe-MI: Not Supported
00:29:17.609 Virtualization Management: Not Supported
00:29:17.609 Doorbell Buffer Config: Not Supported
00:29:17.609 Get LBA Status Capability: Not Supported
00:29:17.609 Command & Feature Lockdown Capability: Not Supported
00:29:17.609 Abort Command Limit: 1
00:29:17.609 Async Event Request Limit: 4
00:29:17.609 Number of Firmware Slots: N/A
00:29:17.609 Firmware Slot 1 Read-Only: N/A
00:29:17.609 Firmware Activation Without Reset: N/A
00:29:17.609 Multiple Update Detection Support: N/A
00:29:17.609 Firmware Update Granularity: No Information Provided
00:29:17.609 Per-Namespace SMART Log: No
00:29:17.609 Asymmetric Namespace Access Log Page: Not Supported
00:29:17.609 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:17.609 Command Effects Log Page: Not Supported
00:29:17.609 Get Log Page Extended Data: Supported
00:29:17.609 Telemetry Log Pages: Not Supported
00:29:17.609 Persistent Event Log Pages: Not Supported
00:29:17.609 Supported Log Pages Log Page: May Support
00:29:17.609 Commands Supported & Effects Log Page: Not Supported
00:29:17.609 Feature Identifiers & Effects Log Page:May Support
00:29:17.609 NVMe-MI Commands & Effects Log Page: May Support
00:29:17.609 Data Area 4 for Telemetry Log: Not Supported
00:29:17.609 Error Log Page Entries Supported: 128
00:29:17.609 Keep Alive: Not Supported
00:29:17.609
00:29:17.609 NVM Command Set Attributes
00:29:17.609 ==========================
00:29:17.609 Submission Queue Entry Size
00:29:17.609 Max: 1
00:29:17.609 Min: 1
00:29:17.609 Completion Queue Entry Size
00:29:17.609 Max: 1
00:29:17.609 Min: 1
00:29:17.609 Number of Namespaces: 0
00:29:17.609 Compare Command: Not Supported
00:29:17.609 Write Uncorrectable Command: Not Supported
00:29:17.609 Dataset Management Command: Not Supported
00:29:17.609 Write Zeroes Command: Not Supported
00:29:17.609 Set Features Save Field: Not Supported
00:29:17.609 Reservations: Not Supported
00:29:17.609 Timestamp: Not Supported
00:29:17.609 Copy: Not Supported
00:29:17.609 Volatile Write Cache: Not Present
00:29:17.609 Atomic Write Unit (Normal): 1
00:29:17.609 Atomic Write Unit (PFail): 1
00:29:17.609 Atomic Compare & Write Unit: 1
00:29:17.609 Fused Compare & Write: Supported
00:29:17.609 Scatter-Gather List
00:29:17.609 SGL Command Set: Supported
00:29:17.609 SGL Keyed: Supported
00:29:17.609 SGL Bit Bucket Descriptor: Not Supported
00:29:17.609 SGL Metadata Pointer: Not Supported
00:29:17.609 Oversized SGL: Not Supported
00:29:17.609 SGL Metadata Address: Not Supported
00:29:17.609 SGL Offset: Supported
00:29:17.609 Transport SGL Data Block: Not Supported
00:29:17.610 Replay Protected Memory Block: Not Supported
00:29:17.610
00:29:17.610 Firmware Slot Information
00:29:17.610 =========================
00:29:17.610 Active slot: 0
00:29:17.610
00:29:17.610
00:29:17.610 Error Log
00:29:17.610 =========
00:29:17.610
00:29:17.610 Active Namespaces
00:29:17.610 =================
00:29:17.610 Discovery Log Page
00:29:17.610 ==================
00:29:17.610 Generation Counter: 2
00:29:17.610 Number of Records: 2
00:29:17.610 Record Format: 0
00:29:17.610
00:29:17.610 Discovery Log Entry 0
00:29:17.610 ----------------------
00:29:17.610 Transport Type: 3 (TCP)
00:29:17.610 Address Family: 1 (IPv4)
00:29:17.610 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:17.610 Entry Flags:
00:29:17.610 Duplicate Returned Information: 1
00:29:17.610 Explicit Persistent Connection Support for Discovery: 1
00:29:17.610 Transport Requirements:
00:29:17.610 Secure Channel: Not Required
00:29:17.610 Port ID: 0 (0x0000)
00:29:17.610 Controller ID: 65535 (0xffff)
00:29:17.610 Admin Max SQ Size: 128
00:29:17.610 Transport Service Identifier: 4420
00:29:17.610 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:17.610 Transport Address: 10.0.0.2
00:29:17.610 Discovery Log Entry 1
00:29:17.610 ----------------------
00:29:17.610 Transport Type: 3 (TCP)
00:29:17.610 Address Family: 1 (IPv4)
00:29:17.610 Subsystem Type: 2 (NVM Subsystem)
00:29:17.610 Entry Flags:
00:29:17.610 Duplicate Returned Information: 0
00:29:17.610 Explicit Persistent Connection Support for Discovery: 0
00:29:17.610 Transport Requirements:
00:29:17.610 Secure Channel: Not Required
00:29:17.610 Port ID: 0 (0x0000)
00:29:17.610 Controller ID: 65535 (0xffff)
00:29:17.610 Admin Max SQ Size: 128
00:29:17.610 Transport Service Identifier: 4420
00:29:17.610 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:17.610 Transport Address: 10.0.0.2 [2024-06-08 00:54:35.647513] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:29:17.610 [2024-06-08 00:54:35.647526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:17.610 [2024-06-08 00:54:35.647533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:17.610 [2024-06-08 00:54:35.647540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:17.610 [2024-06-08 00:54:35.647546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:17.610 [2024-06-08 00:54:35.647554] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.610 [2024-06-08 00:54:35.647558] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.610 [2024-06-08 00:54:35.647561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0)
00:29:17.610 [2024-06-08 00:54:35.647569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.610 [2024-06-08 00:54:35.647581] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0
00:29:17.610 [2024-06-08 00:54:35.647689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.610 [2024-06-08 00:54:35.647696] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.610 [2024-06-08 00:54:35.647700] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.610 [2024-06-08 00:54:35.647703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0
00:29:17.610 [2024-06-08 00:54:35.647710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.610 [2024-06-08 00:54:35.647714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.610 [2024-06-08 00:54:35.647717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0)
00:29:17.610 [2024-06-08 00:54:35.647724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.610 [2024-06-08 00:54:35.647736] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0
00:29:17.610 [2024-06-08 00:54:35.647962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.610 [2024-06-08 00:54:35.647968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.610 [2024-06-08 00:54:35.647971]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.647975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.647980] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:17.610 [2024-06-08 00:54:35.647984] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:17.610 [2024-06-08 00:54:35.647993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.647997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.648007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.648017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.648212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 00:54:35.648218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.648224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648227] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.648237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.648251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.648260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.648444] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 00:54:35.648451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.648454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648458] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.648467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648474] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.648481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.648490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.648700] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 
00:54:35.648706] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.648710] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648713] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.648723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.648736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.648746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.648946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 00:54:35.648952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.648956] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648959] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.648969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648972] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.648976] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.648982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.648992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.649211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 00:54:35.649217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.649220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.649226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.649235] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.649239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.610 [2024-06-08 00:54:35.649243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.610 [2024-06-08 00:54:35.649249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.610 [2024-06-08 00:54:35.649259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.610 [2024-06-08 00:54:35.649487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.610 [2024-06-08 00:54:35.649496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.610 [2024-06-08 00:54:35.649500] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.610 
[2024-06-08 00:54:35.649503] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.610 [2024-06-08 00:54:35.649513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.649527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.649538] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.649745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.649751] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.649755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.649768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.649782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.649791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.649986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.649992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.649995] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.649999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.650008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.650022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.650031] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.650211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.650217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.650221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.650236] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.650250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.650259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.650454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.650460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.650464] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.650477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.650491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.650500] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.650712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.650718] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.650722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650725] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.650734] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0) 00:29:17.611 [2024-06-08 00:54:35.650748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.611 [2024-06-08 00:54:35.650758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0 00:29:17.611 [2024-06-08 00:54:35.650943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.611 [2024-06-08 00:54:35.650949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.611 [2024-06-08 00:54:35.650952] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0 00:29:17.611 [2024-06-08 00:54:35.650965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.611 [2024-06-08 00:54:35.650973] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0)
00:29:17.611 [2024-06-08 00:54:35.650979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.611 [2024-06-08 00:54:35.650989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0
00:29:17.611 [2024-06-08 00:54:35.651180] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.611 [2024-06-08 00:54:35.651187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.611 [2024-06-08 00:54:35.651190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.651193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0
00:29:17.611 [2024-06-08 00:54:35.651205] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.651209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.651212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0)
00:29:17.611 [2024-06-08 00:54:35.651219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.611 [2024-06-08 00:54:35.651228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0
00:29:17.611 [2024-06-08 00:54:35.655410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.611 [2024-06-08 00:54:35.655419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.611 [2024-06-08 00:54:35.655422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.655426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0
00:29:17.611 [2024-06-08 00:54:35.655436] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.655440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.655443] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa93ec0)
00:29:17.611 [2024-06-08 00:54:35.655450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.611 [2024-06-08 00:54:35.655461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb17210, cid 3, qid 0
00:29:17.611 [2024-06-08 00:54:35.655660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.611 [2024-06-08 00:54:35.655666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.611 [2024-06-08 00:54:35.655670] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.611 [2024-06-08 00:54:35.655673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb17210) on tqpair=0xa93ec0
00:29:17.611 [2024-06-08 00:54:35.655681] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:29:17.611
00:29:17.611 00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:17.611 [2024-06-08 00:54:35.695585] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:29:17.611 [2024-06-08 00:54:35.695626] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568833 ]
00:29:17.611 EAL: No free 2048 kB hugepages reported on node 1
00:29:17.611 [2024-06-08 00:54:35.728955] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:29:17.611 [2024-06-08 00:54:35.728994] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:29:17.611 [2024-06-08 00:54:35.728999] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:29:17.611 [2024-06-08 00:54:35.729009] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:29:17.611 [2024-06-08 00:54:35.729016] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:29:17.611 [2024-06-08 00:54:35.729314] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:29:17.612 [2024-06-08 00:54:35.729336] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1830ec0 0
00:29:17.612 [2024-06-08 00:54:35.739410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:29:17.612 [2024-06-08 00:54:35.739424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:29:17.612 [2024-06-08 00:54:35.739432] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:29:17.612 [2024-06-08 00:54:35.739436] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:29:17.612 [2024-06-08 00:54:35.739467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.612 [2024-06-08 00:54:35.739472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:17.612 [2024-06-08 00:54:35.739476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0)
00:29:17.612 [2024-06-08 00:54:35.739488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:17.612 [2024-06-08 00:54:35.739505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0
00:29:17.612 [2024-06-08 00:54:35.747413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:17.612 [2024-06-08 00:54:35.747422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:17.612 [2024-06-08 00:54:35.747426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:17.612 [2024-06-08 00:54:35.747430] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0
00:29:17.612 [2024-06-08 00:54:35.747442] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:17.612 [2024-06-08 00:54:35.747448] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:29:17.612 [2024-06-08 00:54:35.747453] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:29:17.612 [2024-06-08 00:54:35.747463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:17.612 [2024-06-08
00:54:35.747467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.747471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.747479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.747491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.747703] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.747710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.747713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.747717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.747723] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:17.612 [2024-06-08 00:54:35.747730] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:17.612 [2024-06-08 00:54:35.747736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.747740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.747744] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.747750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.747761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.747976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.747983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.747986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.747989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.747995] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:17.612 [2024-06-08 00:54:35.748006] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.748026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.748036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.748251] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 
[2024-06-08 00:54:35.748258] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.748261] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.748270] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748279] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.748293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.748302] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.748515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.748522] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.748525] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.748534] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:17.612 [2024-06-08 00:54:35.748538] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748651] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:17.612 [2024-06-08 00:54:35.748654] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.748675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.748685] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.748889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.748895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.748899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 
00:54:35.748902] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.748910] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:17.612 [2024-06-08 00:54:35.748919] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.748926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.748933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.748942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.749151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.749158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.749161] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.749165] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.749170] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:17.612 [2024-06-08 00:54:35.749174] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:17.612 [2024-06-08 00:54:35.749182] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:17.612 [2024-06-08 00:54:35.749189] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:17.612 [2024-06-08 00:54:35.749198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.749202] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.612 [2024-06-08 00:54:35.749209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.612 [2024-06-08 00:54:35.749219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.612 [2024-06-08 00:54:35.749489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.612 [2024-06-08 00:54:35.749496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.612 [2024-06-08 00:54:35.749499] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.749503] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=4096, cccid=0 00:29:17.612 [2024-06-08 00:54:35.749508] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b3df0) on tqpair(0x1830ec0): expected_datao=0, payload_size=4096 00:29:17.612 [2024-06-08 00:54:35.749512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.749519] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.749523] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.790614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.612 [2024-06-08 00:54:35.790623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.612 [2024-06-08 00:54:35.790626] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.612 [2024-06-08 00:54:35.790630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.612 [2024-06-08 00:54:35.790638] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:17.612 [2024-06-08 00:54:35.790643] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:17.612 [2024-06-08 00:54:35.790647] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:17.612 [2024-06-08 00:54:35.790654] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:17.612 [2024-06-08 00:54:35.790658] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:17.612 [2024-06-08 00:54:35.790663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.790674] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.790682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:17.613 [2024-06-08 00:54:35.790709] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.613 [2024-06-08 00:54:35.790845] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.613 [2024-06-08 00:54:35.790851] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.613 [2024-06-08 00:54:35.790855] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b3df0) on tqpair=0x1830ec0 00:29:17.613 [2024-06-08 00:54:35.790868] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790872] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.613 [2024-06-08 00:54:35.790888] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790892] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790895] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.613 [2024-06-08 00:54:35.790907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790910] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.613 [2024-06-08 00:54:35.790925] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.613 [2024-06-08 00:54:35.790943] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.790950] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.790957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.790960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.790969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.613 [2024-06-08 00:54:35.790981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3df0, cid 0, qid 0 00:29:17.613 [2024-06-08 00:54:35.790986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b3f50, cid 1, qid 0 00:29:17.613 [2024-06-08 00:54:35.790990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b40b0, cid 2, qid 0 00:29:17.613 [2024-06-08 00:54:35.790995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0 00:29:17.613 [2024-06-08 00:54:35.791000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.613 [2024-06-08 00:54:35.791235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.613 [2024-06-08 00:54:35.791241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.613 [2024-06-08 00:54:35.791244] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.791248] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.613 [2024-06-08 00:54:35.791254] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:17.613 [2024-06-08 
00:54:35.791260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.791268] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.791274] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.791280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.791284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.791287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.791294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:17.613 [2024-06-08 00:54:35.791304] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.613 [2024-06-08 00:54:35.795410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.613 [2024-06-08 00:54:35.795418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.613 [2024-06-08 00:54:35.795421] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.613 [2024-06-08 00:54:35.795478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.795488] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.795495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.795506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.613 [2024-06-08 00:54:35.795517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.613 [2024-06-08 00:54:35.795719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.613 [2024-06-08 00:54:35.795725] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.613 [2024-06-08 00:54:35.795728] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795732] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=4096, cccid=4 00:29:17.613 [2024-06-08 00:54:35.795738] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4370) on tqpair(0x1830ec0): expected_datao=0, payload_size=4096 00:29:17.613 [2024-06-08 00:54:35.795743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795749] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795753] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.613 [2024-06-08 00:54:35.795921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.613 [2024-06-08 00:54:35.795924] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.613 [2024-06-08 00:54:35.795939] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:17.613 [2024-06-08 00:54:35.795952] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.795961] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:17.613 [2024-06-08 00:54:35.795968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.795971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.613 [2024-06-08 00:54:35.795978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.613 [2024-06-08 00:54:35.795989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.613 [2024-06-08 00:54:35.796218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.613 [2024-06-08 00:54:35.796224] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.613 [2024-06-08 00:54:35.796227] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.613 [2024-06-08 00:54:35.796231] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=4096, cccid=4 00:29:17.614 [2024-06-08 00:54:35.796235] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4370) on tqpair(0x1830ec0): expected_datao=0, payload_size=4096 00:29:17.614 [2024-06-08 00:54:35.796239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796246] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796250] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.796420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.796423] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.796437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 
[2024-06-08 00:54:35.796456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.796463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.796473] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.614 [2024-06-08 00:54:35.796678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.614 [2024-06-08 00:54:35.796685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.614 [2024-06-08 00:54:35.796688] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796692] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=4096, cccid=4 00:29:17.614 [2024-06-08 00:54:35.796696] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4370) on tqpair(0x1830ec0): expected_datao=0, payload_size=4096 00:29:17.614 [2024-06-08 00:54:35.796700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796707] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796710] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.796897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.796901] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796904] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.796914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796922] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796930] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796935] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796940] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796945] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:17.614 [2024-06-08 00:54:35.796950] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:17.614 [2024-06-08 00:54:35.796955] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:17.614 [2024-06-08 00:54:35.796970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 
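(The DEBUG trace above is the SPDK host driver walking its controller-initialization state machine to "ready": Identify, AER configuration, keep-alive setup, queue-count negotiation and per-namespace Identify, each issued as a capsule command on the admin qpair. The same flow can be replayed by hand with SPDK's identify example app — a minimal sketch, assuming an SPDK build tree on the host and that the target from this run is still listening at 10.0.0.2:4420:

# Hedged sketch - re-drive the identify flow traced above; the binary path
# and endpoint are assumptions taken from this run, adjust for your setup.
./build/examples/identify \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
)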
00:29:17.614 [2024-06-08 00:54:35.796980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.796987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.796994] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.797001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:17.614 [2024-06-08 00:54:35.797013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 4, qid 0 00:29:17.614 [2024-06-08 00:54:35.797018] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b44d0, cid 5, qid 0 00:29:17.614 [2024-06-08 00:54:35.797237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.797243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.797247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797250] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.797260] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.797266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.797269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b44d0) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.797283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.797293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.797302] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b44d0, cid 5, qid 0 00:29:17.614 [2024-06-08 00:54:35.797524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.797530] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.797534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797537] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b44d0) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.797547] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.797557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.797566] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b44d0, cid 5, qid 0 00:29:17.614 [2024-06-08 00:54:35.797772] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.797779] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.797782] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797786] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b44d0) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.797795] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.797799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.797805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.797815] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b44d0, cid 5, qid 0 00:29:17.614 [2024-06-08 00:54:35.798037] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.614 [2024-06-08 00:54:35.798043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.614 [2024-06-08 00:54:35.798047] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b44d0) on tqpair=0x1830ec0 00:29:17.614 [2024-06-08 00:54:35.798062] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.798072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.798079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.798089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.798096] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.798107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.798117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1830ec0) 00:29:17.614 [2024-06-08 00:54:35.798127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.614 [2024-06-08 00:54:35.798138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b44d0, cid 5, qid 0 00:29:17.614 [2024-06-08 00:54:35.798143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4370, cid 
4, qid 0 00:29:17.614 [2024-06-08 00:54:35.798147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4630, cid 6, qid 0 00:29:17.614 [2024-06-08 00:54:35.798152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4790, cid 7, qid 0 00:29:17.614 [2024-06-08 00:54:35.798420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.614 [2024-06-08 00:54:35.798427] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.614 [2024-06-08 00:54:35.798430] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798434] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=8192, cccid=5 00:29:17.614 [2024-06-08 00:54:35.798438] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b44d0) on tqpair(0x1830ec0): expected_datao=0, payload_size=8192 00:29:17.614 [2024-06-08 00:54:35.798442] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798532] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.614 [2024-06-08 00:54:35.798536] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.615 [2024-06-08 00:54:35.798547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.615 [2024-06-08 00:54:35.798550] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798554] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=512, cccid=4 00:29:17.615 [2024-06-08 00:54:35.798558] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4370) on tqpair(0x1830ec0): expected_datao=0, payload_size=512 00:29:17.615 [2024-06-08 00:54:35.798562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798569] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798572] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798578] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.615 [2024-06-08 00:54:35.798583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.615 [2024-06-08 00:54:35.798587] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798590] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=512, cccid=6 00:29:17.615 [2024-06-08 00:54:35.798594] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4630) on tqpair(0x1830ec0): expected_datao=0, payload_size=512 00:29:17.615 [2024-06-08 00:54:35.798598] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798605] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798608] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:17.615 [2024-06-08 00:54:35.798619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:17.615 [2024-06-08 00:54:35.798623] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798628] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1830ec0): datao=0, datal=4096, cccid=7 00:29:17.615 [2024-06-08 00:54:35.798632] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b4790) on tqpair(0x1830ec0): expected_datao=0, payload_size=4096 00:29:17.615 [2024-06-08 00:54:35.798637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798648] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798651] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.615 [2024-06-08 00:54:35.798854] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.615 [2024-06-08 00:54:35.798858] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798862] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b44d0) on tqpair=0x1830ec0 00:29:17.615 [2024-06-08 00:54:35.798874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.615 [2024-06-08 00:54:35.798880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.615 [2024-06-08 00:54:35.798883] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4370) on tqpair=0x1830ec0 00:29:17.615 [2024-06-08 00:54:35.798896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.615 [2024-06-08 00:54:35.798902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.615 [2024-06-08 00:54:35.798905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798909] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4630) on tqpair=0x1830ec0 00:29:17.615 [2024-06-08 00:54:35.798918] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.615 [2024-06-08 00:54:35.798924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.615 [2024-06-08 00:54:35.798927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.615 [2024-06-08 00:54:35.798931] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4790) on tqpair=0x1830ec0
=====================================================
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 8086
Subsystem Vendor ID: 8086
Serial Number: SPDK00000000000001
Model Number: SPDK bdev Controller
Firmware Version: 24.09
Recommended Arb Burst: 6
IEEE OUI Identifier: e4 d2 5c
Multi-path I/O
May have multiple subsystem ports: Yes
May have multiple controllers: Yes
Associated with SR-IOV VF: No
Max Data Transfer Size: 131072
Max Number of Namespaces: 32
Max Number of I/O Queues: 127
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 128
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 15000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Not Supported
Flexible Data Placement Supported: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Persistent Memory Region Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Not Supported
Firmware Activate/Download: Not Supported
Namespace Management: Not Supported
Device Self-Test: Not Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2016-06.io.spdk:cnode1
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page:May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 128
Keep Alive: Supported
Keep Alive Granularity: 10000 ms

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 32
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Not Supported
Reservations: Supported
Timestamp: Not Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 24.09


Commands Supported and Effects
==============================
Admin Commands
--------------
Get Log Page (02h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Keep Alive (18h): Supported
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Copy (19h): Supported LBA-Change
Unknown (79h): Supported LBA-Change
Unknown (7Ah): Supported

Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 0.00 W
Non-Operational State: Operational
Entry Latency: Not Reported
Exit Latency: Not Reported
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 0 Kelvin (-273 Celsius)
Temperature Threshold: [2024-06-08 00:54:35.799029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.799035] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.799042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.799054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4790, cid 7, qid 0 00:29:17.616 [2024-06-08 00:54:35.799262] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.616 [2024-06-08 00:54:35.799268] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.616 [2024-06-08 00:54:35.799272] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.799275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4790) on tqpair=0x1830ec0 00:29:17.616 [2024-06-08 00:54:35.799302] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:17.616 [2024-06-08 00:54:35.799314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.616 [2024-06-08 00:54:35.799320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.616 [2024-06-08 00:54:35.799326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.616 [2024-06-08 00:54:35.799332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.616 [2024-06-08 00:54:35.799340] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.799346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.799349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.799356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.799368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0 00:29:17.616 [2024-06-08 00:54:35.803409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.616 [2024-06-08 00:54:35.803417] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.616 [2024-06-08 00:54:35.803420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803424] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4210) on tqpair=0x1830ec0 00:29:17.616 [2024-06-08 00:54:35.803432] nvme_tcp.c:
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803435] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.803445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.803460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0 00:29:17.616 [2024-06-08 00:54:35.803666] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.616 [2024-06-08 00:54:35.803672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.616 [2024-06-08 00:54:35.803675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803679] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4210) on tqpair=0x1830ec0 00:29:17.616 [2024-06-08 00:54:35.803684] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:17.616 [2024-06-08 00:54:35.803689] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:17.616 [2024-06-08 00:54:35.803698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803705] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.803712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.803722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0 00:29:17.616 [2024-06-08 00:54:35.803930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.616 [2024-06-08 00:54:35.803936] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.616 [2024-06-08 00:54:35.803939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4210) on tqpair=0x1830ec0 00:29:17.616 [2024-06-08 00:54:35.803953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.803960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.803967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.803976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0 00:29:17.616 [2024-06-08 00:54:35.804188] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.616 [2024-06-08 00:54:35.804194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.616 [2024-06-08 00:54:35.804197] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.804201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x18b4210) on tqpair=0x1830ec0 00:29:17.616 [2024-06-08 00:54:35.804214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.804218] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:17.616 [2024-06-08 00:54:35.804221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1830ec0) 00:29:17.616 [2024-06-08 00:54:35.804228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.616 [2024-06-08 00:54:35.804237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b4210, cid 3, qid 0
[... 12 near-identical shutdown-poll exchanges (pdu type = 5 / FABRIC PROPERTY GET qid:0 cid:3 on tqpair(0x1830ec0), 00:54:35.804458 through 00:54:35.807339) omitted ...]
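(The poll loop elided above is the standard NVMe shutdown handshake over fabrics: after CC.SHN was set via the Fabrics Property Set traced earlier, the host keeps re-reading the CSTS property, offset 0x1c, until CSTS.SHST, bits 3:2, reports shutdown complete, value 2. A hypothetical manual equivalent with nvme-cli against a connected fabrics controller — the device name is illustrative:

# Hedged sketch - read CSTS (0x1c) and decode it; look for SHST == 2.
nvme get-property /dev/nvme0 --offset=0x1c --human-readable
)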
00:29:17.617 [2024-06-08 00:54:35.811409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:17.617 [2024-06-08 00:54:35.811417] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:17.617 [2024-06-08 00:54:35.811420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:17.617 [2024-06-08 00:54:35.811424] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18b4210) on tqpair=0x1830ec0 00:29:17.617 [2024-06-08 00:54:35.811433] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
0 Kelvin (-273 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 0
Data Units Written: 0
Host Read Commands: 0
Host Write Commands: 0
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes

Number of Queues
================
Number of I/O Submission Queues: 127
Number of I/O Completion Queues: 127

Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Not Supported
Deallocated Read Value: Unknown
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Supported
Namespace Sharing Capabilities: Multiple Controllers
Size (in LBAs): 131072 (0GiB)
Capacity (in LBAs): 131072 (0GiB)
Utilization (in LBAs): 131072 (0GiB)
NGUID: ABCDEF0123456789ABCDEF0123456789
EUI64: ABCDEF0123456789
UUID: 1d9fb220-90d6-49a5-b4ba-6747dbed07a3
Thin Provisioning: Not Supported
Per-NS Atomic Units: Yes
Atomic Boundary Size (Normal): 0
Atomic Boundary Size (PFail): 0
Atomic Boundary Offset: 0
Maximum Single Source Range Length: 65535
Maximum Copy Length: 65535
Maximum Source Range Count: 1
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 1
Current LBA Format: LBA Format #00
LBA Format #00: Data Size: 512 Metadata Size: 0


00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:54:35 nvmf_tcp.nvmf_identify --
nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.618 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:17.618 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.618 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:17.618 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.618 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.618 rmmod nvme_tcp 00:29:17.618 rmmod nvme_fabrics 00:29:17.618 rmmod nvme_keyring 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 568509 ']' 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 568509 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 568509 ']' 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 568509 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 568509 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 568509' 00:29:17.878 killing process with pid 568509 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 568509 00:29:17.878 00:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 568509 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.878 00:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.425 00:54:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.425 00:29:20.425 real 0m10.822s 00:29:20.425 user 0m7.849s 00:29:20.425 sys 0m5.488s 00:29:20.425 00:54:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:20.425 00:54:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.425 ************************************ 00:29:20.425 END TEST nvmf_identify 00:29:20.425 ************************************ 00:29:20.425 00:54:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:29:20.425 00:54:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:20.425 00:54:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:20.425 00:54:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.425 ************************************ 00:29:20.425 START TEST nvmf_perf 00:29:20.425 ************************************ 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:20.425 * Looking for test storage... 00:29:20.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:20.425 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:27.018 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:27.018 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:27.018 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:27.018 
Found net devices under 0000:4b:00.1: cvl_0_1 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.018 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:27.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:29:27.280 00:29:27.280 --- 10.0.0.2 ping statistics --- 00:29:27.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.280 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:29:27.280 00:29:27.280 --- 10.0.0.1 ping statistics --- 00:29:27.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.280 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.280 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.540 00:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:27.540 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.540 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=572855 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 572855 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 572855 ']' 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:27.541 00:54:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.541 [2024-06-08 00:54:45.632195] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:27.541 [2024-06-08 00:54:45.632256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.541 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.541 [2024-06-08 00:54:45.701894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.541 [2024-06-08 00:54:45.776694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.541 [2024-06-08 00:54:45.776731] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:27.541 [2024-06-08 00:54:45.776739] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.541 [2024-06-08 00:54:45.776745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.541 [2024-06-08 00:54:45.776750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.541 [2024-06-08 00:54:45.776887] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.541 [2024-06-08 00:54:45.777000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.541 [2024-06-08 00:54:45.777156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.541 [2024-06-08 00:54:45.777156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:28.482 00:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:28.742 00:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:28.742 00:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:29.003 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:29.263 [2024-06-08 00:54:47.421679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.263 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.524 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:29.524 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.524 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:29.524 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:29.785 00:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.046 [2024-06-08 00:54:48.100234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.046 00:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.046 00:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:30.046 00:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:30.046 00:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:30.046 00:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:31.431 Initializing NVMe Controllers 00:29:31.431 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:31.431 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:31.431 Initialization complete. Launching workers. 00:29:31.431 ======================================================== 00:29:31.431 Latency(us) 00:29:31.431 Device Information : IOPS MiB/s Average min max 00:29:31.431 PCIE (0000:65:00.0) NSID 1 from core 0: 79790.92 311.68 400.50 13.47 4563.20 00:29:31.431 ======================================================== 00:29:31.431 Total : 79790.92 311.68 400.50 13.47 4563.20 00:29:31.431 00:29:31.431 00:54:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.431 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.816 Initializing NVMe Controllers 00:29:32.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:32.816 Initialization complete. Launching workers. 
00:29:32.816 ======================================================== 00:29:32.816 Latency(us) 00:29:32.816 Device Information : IOPS MiB/s Average min max 00:29:32.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.79 0.23 17716.58 439.94 45946.81 00:29:32.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.79 0.23 17416.36 7036.29 47903.79 00:29:32.816 ======================================================== 00:29:32.816 Total : 116.58 0.46 17565.19 439.94 47903.79 00:29:32.816 00:29:32.816 00:54:50 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.816 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.201 Initializing NVMe Controllers 00:29:34.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:34.201 Initialization complete. Launching workers. 00:29:34.201 ======================================================== 00:29:34.201 Latency(us) 00:29:34.201 Device Information : IOPS MiB/s Average min max 00:29:34.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10076.93 39.36 3190.48 492.37 44603.15 00:29:34.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3731.97 14.58 8613.17 4997.66 23260.22 00:29:34.201 ======================================================== 00:29:34.201 Total : 13808.90 53.94 4656.01 492.37 44603.15 00:29:34.201 00:29:34.201 00:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:34.201 00:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:34.201 00:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.201 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.812 Initializing NVMe Controllers 00:29:36.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.812 Controller IO queue size 128, less than required. 00:29:36.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.812 Controller IO queue size 128, less than required. 00:29:36.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:36.812 Initialization complete. Launching workers. 
00:29:36.812 ======================================================== 00:29:36.812 Latency(us) 00:29:36.812 Device Information : IOPS MiB/s Average min max 00:29:36.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 949.27 237.32 137874.36 67450.66 195566.33 00:29:36.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 560.57 140.14 238011.69 89436.31 360572.62 00:29:36.812 ======================================================== 00:29:36.812 Total : 1509.84 377.46 175053.08 67450.66 360572.62 00:29:36.812 00:29:36.812 00:54:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:36.812 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.812 No valid NVMe controllers or AIO or URING devices found 00:29:36.812 Initializing NVMe Controllers 00:29:36.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.812 Controller IO queue size 128, less than required. 00:29:36.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.812 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:36.812 Controller IO queue size 128, less than required. 00:29:36.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.812 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:36.812 WARNING: Some requested NVMe devices were skipped 00:29:36.812 00:54:55 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:36.812 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.357 Initializing NVMe Controllers 00:29:39.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.357 Controller IO queue size 128, less than required. 00:29:39.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.357 Controller IO queue size 128, less than required. 00:29:39.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:39.357 Initialization complete. Launching workers. 
00:29:39.357 00:29:39.357 ==================== 00:29:39.357 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:39.357 TCP transport: 00:29:39.357 polls: 28655 00:29:39.357 idle_polls: 9824 00:29:39.357 sock_completions: 18831 00:29:39.357 nvme_completions: 4123 00:29:39.357 submitted_requests: 6176 00:29:39.357 queued_requests: 1 00:29:39.357 00:29:39.357 ==================== 00:29:39.357 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:39.357 TCP transport: 00:29:39.357 polls: 27542 00:29:39.357 idle_polls: 8001 00:29:39.357 sock_completions: 19541 00:29:39.357 nvme_completions: 6265 00:29:39.357 submitted_requests: 9294 00:29:39.357 queued_requests: 1 00:29:39.357 ======================================================== 00:29:39.357 Latency(us) 00:29:39.357 Device Information : IOPS MiB/s Average min max 00:29:39.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1029.20 257.30 127944.91 68558.15 221271.95 00:29:39.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1564.02 391.00 83328.05 37688.96 131058.01 00:29:39.357 ======================================================== 00:29:39.357 Total : 2593.22 648.30 101035.61 37688.96 221271.95 00:29:39.357 00:29:39.357 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:39.357 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:39.618 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:39.618 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:29:39.618 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:29:40.564 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:29:40.825 { 00:29:40.825 "uuid": "cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41", 00:29:40.825 "name": "lvs_0", 00:29:40.825 "base_bdev": "Nvme0n1", 00:29:40.825 "total_data_clusters": 457407, 00:29:40.825 "free_clusters": 457407, 00:29:40.825 "block_size": 512, 00:29:40.825 "cluster_size": 4194304 00:29:40.825 } 00:29:40.825 ]' 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41") .free_clusters' 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=457407 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41") .cluster_size' 00:29:40.825 00:54:58 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1829628 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1829628 00:29:40.825 1829628 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:40.825 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41 lbd_0 20480 00:29:41.085 00:54:59 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1566cf56-85a1-4f43-8a3d-019ffa326fa9 00:29:41.085 00:54:59 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1566cf56-85a1-4f43-8a3d-019ffa326fa9 lvs_n_0 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0ae28001-6da6-40aa-b30a-72328a9668ba 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0ae28001-6da6-40aa-b30a-72328a9668ba 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=0ae28001-6da6-40aa-b30a-72328a9668ba 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:29:42.998 { 00:29:42.998 "uuid": "cb10bb5d-3a35-4ffa-9fb1-3dea352dcc41", 00:29:42.998 "name": "lvs_0", 00:29:42.998 "base_bdev": "Nvme0n1", 00:29:42.998 "total_data_clusters": 457407, 00:29:42.998 "free_clusters": 452287, 00:29:42.998 "block_size": 512, 00:29:42.998 "cluster_size": 4194304 00:29:42.998 }, 00:29:42.998 { 00:29:42.998 "uuid": "0ae28001-6da6-40aa-b30a-72328a9668ba", 00:29:42.998 "name": "lvs_n_0", 00:29:42.998 "base_bdev": "1566cf56-85a1-4f43-8a3d-019ffa326fa9", 00:29:42.998 "total_data_clusters": 5114, 00:29:42.998 "free_clusters": 5114, 00:29:42.998 "block_size": 512, 00:29:42.998 "cluster_size": 4194304 00:29:42.998 } 00:29:42.998 ]' 00:29:42.998 00:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="0ae28001-6da6-40aa-b30a-72328a9668ba") .free_clusters' 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0ae28001-6da6-40aa-b30a-72328a9668ba") .cluster_size' 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:29:42.998 20456 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0ae28001-6da6-40aa-b30a-72328a9668ba lbd_nest_0 20456 00:29:42.998 00:55:01 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=75bb98bb-8c08-4c29-9062-d58f99c5781b 00:29:42.998 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.258 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:43.258 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 75bb98bb-8c08-4c29-9062-d58f99c5781b 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:43.519 00:55:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.519 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.748 Initializing NVMe Controllers 00:29:55.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.748 Initialization complete. Launching workers. 00:29:55.748 ======================================================== 00:29:55.748 Latency(us) 00:29:55.748 Device Information : IOPS MiB/s Average min max 00:29:55.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.30 0.02 22631.41 202.38 48521.68 00:29:55.748 ======================================================== 00:29:55.748 Total : 44.30 0.02 22631.41 202.38 48521.68 00:29:55.748 00:29:55.748 00:55:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:55.748 00:55:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.748 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.748 Initializing NVMe Controllers 00:30:05.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.748 Initialization complete. Launching workers. 
00:30:05.748 ======================================================== 00:30:05.748 Latency(us) 00:30:05.748 Device Information : IOPS MiB/s Average min max 00:30:05.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.90 10.61 11795.23 4956.59 47888.12 00:30:05.748 ======================================================== 00:30:05.748 Total : 84.90 10.61 11795.23 4956.59 47888.12 00:30:05.748 00:30:05.748 00:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:05.749 00:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.749 00:55:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.749 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.740 Initializing NVMe Controllers 00:30:15.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.740 Initialization complete. Launching workers. 00:30:15.740 ======================================================== 00:30:15.740 Latency(us) 00:30:15.740 Device Information : IOPS MiB/s Average min max 00:30:15.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9105.79 4.45 3514.40 293.50 10224.84 00:30:15.741 ======================================================== 00:30:15.741 Total : 9105.79 4.45 3514.40 293.50 10224.84 00:30:15.741 00:30:15.741 00:55:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.741 00:55:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.741 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.781 Initializing NVMe Controllers 00:30:25.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.781 Initialization complete. Launching workers. 00:30:25.781 ======================================================== 00:30:25.781 Latency(us) 00:30:25.781 Device Information : IOPS MiB/s Average min max 00:30:25.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1846.00 230.75 17352.51 1477.74 40520.17 00:30:25.781 ======================================================== 00:30:25.781 Total : 1846.00 230.75 17352.51 1477.74 40520.17 00:30:25.781 00:30:25.781 00:55:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:25.781 00:55:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:25.781 00:55:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.778 Initializing NVMe Controllers 00:30:35.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.778 Controller IO queue size 128, less than required. 00:30:35.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:35.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.778 Initialization complete. Launching workers. 00:30:35.778 ======================================================== 00:30:35.779 Latency(us) 00:30:35.779 Device Information : IOPS MiB/s Average min max 00:30:35.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15729.03 7.68 8138.06 1954.85 48072.17 00:30:35.779 ======================================================== 00:30:35.779 Total : 15729.03 7.68 8138.06 1954.85 48072.17 00:30:35.779 00:30:35.779 00:55:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:35.779 00:55:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.779 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.797 Initializing NVMe Controllers 00:30:45.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.797 Controller IO queue size 128, less than required. 00:30:45.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.797 Initialization complete. Launching workers. 00:30:45.797 ======================================================== 00:30:45.797 Latency(us) 00:30:45.797 Device Information : IOPS MiB/s Average min max 00:30:45.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1139.90 142.49 112769.54 15128.61 242708.89 00:30:45.797 ======================================================== 00:30:45.797 Total : 1139.90 142.49 112769.54 15128.61 242708.89 00:30:45.797 00:30:45.797 00:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.057 00:56:04 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75bb98bb-8c08-4c29-9062-d58f99c5781b 00:30:47.969 00:56:05 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:47.969 00:56:05 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1566cf56-85a1-4f43-8a3d-019ffa326fa9 00:30:47.969 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:48.229 rmmod nvme_tcp 00:30:48.229 rmmod nvme_fabrics 00:30:48.229 rmmod nvme_keyring 00:30:48.229 00:56:06 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 572855 ']' 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 572855 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 572855 ']' 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 572855 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 572855 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 572855' 00:30:48.229 killing process with pid 572855 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 572855 00:30:48.229 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 572855 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.143 00:56:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.692 00:56:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.692 00:30:52.692 real 1m32.232s 00:30:52.692 user 5m27.394s 00:30:52.692 sys 0m13.620s 00:30:52.692 00:56:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:52.692 00:56:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:52.692 ************************************ 00:30:52.692 END TEST nvmf_perf 00:30:52.692 ************************************ 00:30:52.692 00:56:10 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:52.692 00:56:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:52.692 00:56:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:52.692 00:56:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:52.692 ************************************ 00:30:52.692 START TEST nvmf_fio_host 00:30:52.692 ************************************ 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:52.692 * Looking for test storage... 
00:30:52.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.692 00:56:10 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.693 00:56:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:59.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:59.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:59.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:59.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
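At this point the discovery pass has matched both ports of the E810 NIC (0x8086:0x159b, ice driver), found their net devices cvl_0_0 and cvl_0_1, and set is_hw=yes, so nvmftestinit takes the TCP branch. The nvmf_tcp_init trace that follows moves one port into a private network namespace and addresses the pair as a point-to-point 10.0.0.0/24 link, which lets the target and the initiator exercise real hardware on a single host. Condensed into a standalone sketch (interface and namespace names are taken from this run, not a general-purpose script), the wiring is:

  # Point-to-point topology built by nvmf_tcp_init (names from this run).
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"                        # isolate the target side
  ip link set "$TGT_IF" netns "$NS"         # move the target port into it
  ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                        # root ns -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespaced target -> root ns

The nvmf_tgt application is then launched inside the namespace (NVMF_TARGET_NS_CMD prefixes every target command with ip netns exec cvl_0_0_ns_spdk), while fio and the rpc.py client stay in the root namespace.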
00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:30:59.287 00:30:59.287 --- 10.0.0.2 ping statistics --- 00:30:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.287 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:30:59.287 00:30:59.287 --- 10.0.0.1 ping statistics --- 00:30:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.287 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:59.287 00:56:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=592609 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 592609 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 592609 ']' 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.550 00:56:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:59.550 [2024-06-08 00:56:17.644784] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:30:59.550 [2024-06-08 00:56:17.644849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.550 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.550 [2024-06-08 00:56:17.715118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.550 [2024-06-08 00:56:17.789740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:59.550 [2024-06-08 00:56:17.789778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.550 [2024-06-08 00:56:17.789786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.550 [2024-06-08 00:56:17.789797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.550 [2024-06-08 00:56:17.789803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.551 [2024-06-08 00:56:17.789945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.551 [2024-06-08 00:56:17.790060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.551 [2024-06-08 00:56:17.790218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.551 [2024-06-08 00:56:17.790220] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.164 00:56:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:00.164 00:56:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:31:00.164 00:56:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:00.425 [2024-06-08 00:56:18.561339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.425 00:56:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:00.425 00:56:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:00.425 00:56:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.425 00:56:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:00.685 Malloc1 00:31:00.685 00:56:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:00.944 00:56:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:00.944 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.203 [2024-06-08 00:56:19.290831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:01.203 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:01.502 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.763 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:01.763 fio-3.35 00:31:01.763 Starting 1 thread 00:31:01.763 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.313 00:31:04.313 test: (groupid=0, jobs=1): err= 0: pid=593164: Sat Jun 8 00:56:22 2024 00:31:04.313 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.4MiB/2005msec) 00:31:04.313 slat (usec): min=2, max=280, avg= 2.27, stdev= 2.82 00:31:04.313 clat (usec): min=3673, max=10501, avg=6968.09, stdev=1400.79 00:31:04.313 lat (usec): min=3705, max=10503, avg=6970.35, stdev=1400.79 00:31:04.313 clat percentiles (usec): 00:31:04.313 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5669], 00:31:04.313 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6390], 60.00th=[ 7504], 00:31:04.313 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9241], 00:31:04.313 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[10290], 99.95th=[10290], 00:31:04.313 | 99.99th=[10421] 00:31:04.314 bw ( KiB/s): min=32040, 
max=48136, per=99.88%, avg=39996.00, stdev=8474.11, samples=4 00:31:04.314 iops : min= 8010, max=12034, avg=9999.00, stdev=2118.53, samples=4 00:31:04.314 write: IOPS=10.0k, BW=39.1MiB/s (41.1MB/s)(78.5MiB/2005msec); 0 zone resets 00:31:04.314 slat (usec): min=2, max=278, avg= 2.37, stdev= 2.19 00:31:04.314 clat (usec): min=2897, max=9312, avg=5720.74, stdev=1201.84 00:31:04.314 lat (usec): min=2915, max=9314, avg=5723.11, stdev=1201.89 00:31:04.314 clat percentiles (usec): 00:31:04.314 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:31:04.314 | 30.00th=[ 4752], 40.00th=[ 4948], 50.00th=[ 5211], 60.00th=[ 6259], 00:31:04.314 | 70.00th=[ 6718], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7635], 00:31:04.314 | 99.00th=[ 8029], 99.50th=[ 8160], 99.90th=[ 8356], 99.95th=[ 8586], 00:31:04.314 | 99.99th=[ 8717] 00:31:04.314 bw ( KiB/s): min=32792, max=48200, per=99.99%, avg=40084.00, stdev=8229.13, samples=4 00:31:04.314 iops : min= 8198, max=12050, avg=10021.00, stdev=2057.28, samples=4 00:31:04.314 lat (msec) : 4=0.70%, 10=99.13%, 20=0.17% 00:31:04.314 cpu : usr=73.75%, sys=24.00%, ctx=73, majf=0, minf=5 00:31:04.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:04.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.314 issued rwts: total=20072,20095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.314 00:31:04.314 Run status group 0 (all jobs): 00:31:04.314 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.4MiB (82.2MB), run=2005-2005msec 00:31:04.314 WRITE: bw=39.1MiB/s (41.1MB/s), 39.1MiB/s-39.1MiB/s (41.1MB/s-41.1MB/s), io=78.5MiB (82.3MB), run=2005-2005msec 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 
-- # awk '{print $3}' 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:04.314 00:56:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:04.574 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:04.574 fio-3.35 00:31:04.574 Starting 1 thread 00:31:04.574 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.120 00:31:07.120 test: (groupid=0, jobs=1): err= 0: pid=593968: Sat Jun 8 00:56:25 2024 00:31:07.120 read: IOPS=8516, BW=133MiB/s (140MB/s)(267MiB/2007msec) 00:31:07.120 slat (usec): min=3, max=108, avg= 3.69, stdev= 1.70 00:31:07.120 clat (usec): min=3254, max=55530, avg=9214.25, stdev=3877.47 00:31:07.120 lat (usec): min=3258, max=55534, avg=9217.94, stdev=3877.60 00:31:07.120 clat percentiles (usec): 00:31:07.120 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6915], 00:31:07.120 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:31:07.120 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12387], 95.00th=[13435], 00:31:07.120 | 99.00th=[15401], 99.50th=[44827], 99.90th=[54264], 99.95th=[55313], 00:31:07.120 | 99.99th=[55313] 00:31:07.120 bw ( KiB/s): min=56032, max=78944, per=51.26%, avg=69840.00, stdev=11153.66, samples=4 00:31:07.120 iops : min= 3502, max= 4934, avg=4365.00, stdev=697.10, samples=4 00:31:07.120 write: IOPS=4999, BW=78.1MiB/s (81.9MB/s)(142MiB/1819msec); 0 zone resets 00:31:07.120 slat (usec): min=40, max=452, avg=41.31, stdev= 9.17 00:31:07.120 clat (usec): min=3686, max=55717, avg=9867.99, stdev=3150.53 00:31:07.120 lat (usec): min=3726, max=55758, avg=9909.29, stdev=3151.89 00:31:07.120 clat percentiles (usec): 00:31:07.120 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8291], 00:31:07.120 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10028], 00:31:07.120 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11863], 95.00th=[13042], 00:31:07.120 | 99.00th=[15401], 99.50th=[16712], 99.90th=[54789], 99.95th=[55313], 00:31:07.120 | 99.99th=[55837] 00:31:07.120 bw ( KiB/s): min=59520, max=81696, per=90.95%, avg=72752.00, stdev=10761.47, samples=4 00:31:07.120 iops : min= 3720, max= 5106, avg=4547.00, stdev=672.59, samples=4 00:31:07.120 lat (msec) : 4=0.18%, 10=65.57%, 20=33.75%, 50=0.12%, 100=0.37% 00:31:07.120 cpu : usr=83.00%, sys=13.71%, ctx=11, majf=0, minf=16 00:31:07.120 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:31:07.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:07.120 issued rwts: total=17092,9094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:07.120 00:31:07.120 Run status group 0 (all jobs): 00:31:07.120 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2007-2007msec 00:31:07.120 WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=142MiB (149MB), run=1819-1819msec 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:31:07.120 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:31:07.692 Nvme0n1 00:31:07.692 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=60ecc037-8862-458e-93e5-699a8874d922 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 60ecc037-8862-458e-93e5-699a8874d922 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=60ecc037-8862-458e-93e5-699a8874d922 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:31:08.264 { 00:31:08.264 "uuid": "60ecc037-8862-458e-93e5-699a8874d922", 00:31:08.264 "name": "lvs_0", 00:31:08.264 "base_bdev": "Nvme0n1", 00:31:08.264 "total_data_clusters": 1787, 00:31:08.264 "free_clusters": 1787, 00:31:08.264 "block_size": 512, 00:31:08.264 
"cluster_size": 1073741824 00:31:08.264 } 00:31:08.264 ]' 00:31:08.264 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="60ecc037-8862-458e-93e5-699a8874d922") .free_clusters' 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1787 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60ecc037-8862-458e-93e5-699a8874d922") .cluster_size' 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1829888 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1829888 00:31:08.525 1829888 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:31:08.525 6751bc7c-30cb-483b-858d-b9b42e004277 00:31:08.525 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:08.786 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.047 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n 
'' ]] 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:09.048 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:09.327 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:09.327 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:09.327 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:09.327 00:56:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.591 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:09.591 fio-3.35 00:31:09.591 Starting 1 thread 00:31:09.591 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.137 00:31:12.137 test: (groupid=0, jobs=1): err= 0: pid=595164: Sat Jun 8 00:56:30 2024 00:31:12.137 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2006msec) 00:31:12.137 slat (usec): min=2, max=111, avg= 2.26, stdev= 1.02 00:31:12.137 clat (usec): min=3141, max=10938, avg=6772.13, stdev=552.04 00:31:12.137 lat (usec): min=3157, max=10940, avg=6774.39, stdev=551.99 00:31:12.137 clat percentiles (usec): 00:31:12.137 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:31:12.137 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:31:12.137 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7635], 00:31:12.137 | 99.00th=[ 8029], 99.50th=[ 8356], 99.90th=[ 9503], 99.95th=[ 9896], 00:31:12.137 | 99.99th=[10552] 00:31:12.137 bw ( KiB/s): min=40928, max=42368, per=99.97%, avg=41852.00, stdev=675.56, samples=4 00:31:12.137 iops : min=10232, max=10592, avg=10463.00, stdev=168.89, samples=4 00:31:12.137 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2006msec); 0 zone resets 00:31:12.137 slat (nsec): min=2170, max=95328, avg=2357.13, stdev=702.81 00:31:12.137 clat (usec): min=1116, max=10318, avg=5389.65, stdev=460.97 00:31:12.137 lat (usec): min=1124, max=10320, avg=5392.01, stdev=460.95 00:31:12.137 clat percentiles (usec): 00:31:12.138 | 1.00th=[ 4293], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5014], 00:31:12.138 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5473], 00:31:12.138 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6128], 00:31:12.138 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 9503], 00:31:12.138 | 99.99th=[ 9896] 00:31:12.138 bw ( KiB/s): min=41472, max=42248, per=100.00%, avg=41888.00, stdev=318.46, samples=4 00:31:12.138 iops : min=10368, max=10562, avg=10472.00, stdev=79.62, samples=4 00:31:12.138 lat (msec) : 2=0.01%, 4=0.20%, 10=99.77%, 20=0.03% 00:31:12.138 cpu : usr=63.69%, sys=31.22%, ctx=32, majf=0, minf=5 00:31:12.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:12.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:31:12.138 issued rwts: total=20996,21001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:12.138 00:31:12.138 Run status group 0 (all jobs): 00:31:12.138 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2006-2006msec 00:31:12.138 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2006-2006msec 00:31:12.138 00:56:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:12.138 00:56:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d908c875-2af6-4195-bd23-dbd20f78a111 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d908c875-2af6-4195-bd23-dbd20f78a111 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=d908c875-2af6-4195-bd23-dbd20f78a111 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:31:13.082 { 00:31:13.082 "uuid": "60ecc037-8862-458e-93e5-699a8874d922", 00:31:13.082 "name": "lvs_0", 00:31:13.082 "base_bdev": "Nvme0n1", 00:31:13.082 "total_data_clusters": 1787, 00:31:13.082 "free_clusters": 0, 00:31:13.082 "block_size": 512, 00:31:13.082 "cluster_size": 1073741824 00:31:13.082 }, 00:31:13.082 { 00:31:13.082 "uuid": "d908c875-2af6-4195-bd23-dbd20f78a111", 00:31:13.082 "name": "lvs_n_0", 00:31:13.082 "base_bdev": "6751bc7c-30cb-483b-858d-b9b42e004277", 00:31:13.082 "total_data_clusters": 457025, 00:31:13.082 "free_clusters": 457025, 00:31:13.082 "block_size": 512, 00:31:13.082 "cluster_size": 4194304 00:31:13.082 } 00:31:13.082 ]' 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="d908c875-2af6-4195-bd23-dbd20f78a111") .free_clusters' 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=457025 00:31:13.082 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d908c875-2af6-4195-bd23-dbd20f78a111") .cluster_size' 00:31:13.342 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:31:13.342 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1828100 00:31:13.342 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1828100 00:31:13.342 1828100 00:31:13.342 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:31:14.284 6ef9d939-c43f-4909-a585-9ea675e6e4dd 00:31:14.284 00:56:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:14.544 00:56:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:14.544 00:56:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.805 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:14.806 00:56:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.065 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:15.065 fio-3.35 00:31:15.065 Starting 1 thread 00:31:15.065 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.611 00:31:17.611 test: (groupid=0, jobs=1): err= 0: pid=596341: Sat Jun 8 00:56:35 2024 00:31:17.611 read: IOPS=9263, BW=36.2MiB/s (37.9MB/s)(72.6MiB/2006msec) 00:31:17.611 slat (usec): min=2, max=107, avg= 2.27, stdev= 1.07 00:31:17.611 clat (usec): min=1651, max=13178, avg=7634.64, stdev=612.87 00:31:17.611 lat (usec): min=1664, max=13181, avg=7636.90, stdev=612.81 00:31:17.611 clat percentiles (usec): 00:31:17.611 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7177], 00:31:17.611 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:31:17.611 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:31:17.611 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11469], 99.95th=[12387], 00:31:17.611 | 99.99th=[13173] 00:31:17.611 bw ( KiB/s): min=35808, max=37712, per=99.91%, avg=37022.00, stdev=834.26, samples=4 00:31:17.611 iops : min= 8952, max= 9428, avg=9255.50, stdev=208.57, samples=4 00:31:17.611 write: IOPS=9268, BW=36.2MiB/s (38.0MB/s)(72.6MiB/2006msec); 0 zone resets 00:31:17.611 slat (nsec): min=2175, max=95206, avg=2373.16, stdev=761.73 00:31:17.611 clat (usec): min=1389, max=10970, avg=6076.38, stdev=520.44 00:31:17.611 lat (usec): min=1397, max=10972, avg=6078.75, stdev=520.41 00:31:17.611 clat percentiles (usec): 00:31:17.611 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:31:17.611 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:31:17.611 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6849], 00:31:17.611 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9241], 99.95th=[10290], 00:31:17.611 | 99.99th=[10945] 00:31:17.611 bw ( KiB/s): min=36688, max=37384, per=100.00%, avg=37076.00, stdev=329.36, samples=4 00:31:17.611 iops : min= 9172, max= 9346, avg=9269.00, stdev=82.34, samples=4 00:31:17.611 lat (msec) : 2=0.02%, 4=0.08%, 10=99.77%, 20=0.13% 00:31:17.611 cpu : usr=64.04%, sys=31.62%, ctx=38, majf=0, minf=5 00:31:17.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:17.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:17.611 issued rwts: total=18583,18593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:17.611 00:31:17.611 Run status group 0 (all jobs): 00:31:17.611 READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.6MiB (76.1MB), run=2006-2006msec 00:31:17.611 WRITE: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.6MiB (76.2MB), run=2006-2006msec 00:31:17.611 00:56:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:17.611 00:56:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:17.611 00:56:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:20.156 00:56:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:20.156 00:56:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:20.416 00:56:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:20.677 00:56:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.629 rmmod nvme_tcp 00:31:22.629 rmmod nvme_fabrics 00:31:22.629 rmmod nvme_keyring 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 592609 ']' 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 592609 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 592609 ']' 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 592609 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 592609 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 592609' 00:31:22.629 killing process with pid 592609 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 592609 00:31:22.629 00:56:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 592609 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.891 00:56:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.891 
00:56:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.438 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.438 00:31:25.438 real 0m32.537s 00:31:25.438 user 2m46.699s 00:31:25.438 sys 0m9.878s 00:31:25.438 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:25.438 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.438 ************************************ 00:31:25.438 END TEST nvmf_fio_host 00:31:25.438 ************************************ 00:31:25.438 00:56:43 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:25.438 00:56:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:25.438 00:56:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:25.438 00:56:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.438 ************************************ 00:31:25.438 START TEST nvmf_failover 00:31:25.438 ************************************ 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:25.438 * Looking for test storage... 00:31:25.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.438 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.439 00:56:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.028 00:56:50 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:32.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:32.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.028 00:56:50 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:32.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:32.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.028 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:31:32.290 00:31:32.290 --- 10.0.0.2 ping statistics --- 00:31:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.290 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:31:32.290 00:31:32.290 --- 10.0.0.1 ping statistics --- 00:31:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.290 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=601752 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 601752 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 601752 ']' 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
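While waitforlisten polls for the target's /var/tmp/spdk.sock, it is worth unpacking what nvmf_tcp_init built just above: the two e810 ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and one ping in each direction proves the link before the target app is started inside the namespace. A minimal standalone sketch of that wiring (addresses and interface names taken from this run; run as root, address flushing and error handling omitted):

  ip netns add cvl_0_0_ns_spdk                     # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port out of the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                               # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> root ns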
00:31:32.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:32.290 00:56:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:32.290 [2024-06-08 00:56:50.471872] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:32.290 [2024-06-08 00:56:50.471959] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.290 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.290 [2024-06-08 00:56:50.559779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:32.552 [2024-06-08 00:56:50.654273] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.552 [2024-06-08 00:56:50.654330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.552 [2024-06-08 00:56:50.654338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.552 [2024-06-08 00:56:50.654345] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.552 [2024-06-08 00:56:50.654351] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.552 [2024-06-08 00:56:50.654689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.552 [2024-06-08 00:56:50.654829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.552 [2024-06-08 00:56:50.654829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.123 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:33.383 [2024-06-08 00:56:51.428147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.383 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:33.383 Malloc0 00:31:33.383 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.644 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:33.904 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:31:33.904 [2024-06-08 00:56:52.078692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.905 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:34.165 [2024-06-08 00:56:52.247132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:34.165 [2024-06-08 00:56:52.415691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=602239 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 602239 /var/tmp/bdevperf.sock 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 602239 ']' 00:31:34.165 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:34.425 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:34.425 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:34.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
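The target now exposes nqn.2016-06.io.spdk:cnode1 on three portals (4420, 4421, 4422) and bdevperf is coming up on its own RPC socket. The next step, visible below, attaches the controller twice under the single name NVMe0: the first call creates the NVMe0n1 bdev over port 4420, and because the second call reuses the same controller name and subsystem NQN it is recorded as an alternate (failover) path rather than a new bdev, which is why no bdev name is echoed for it. Condensed (rpc and sock are shorthand variables, not part of the harness):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # primary path: creates bdev NVMe0n1 backed by the 4420 listener
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # same name and NQN, different port: registered as a failover path, no new bdev
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1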
00:31:34.425 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:34.425 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:35.002 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:35.002 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:31:35.002 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.575 NVMe0n1 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.575 00:31:35.575 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=602427 00:31:35.575 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:35.575 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.962 00:56:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.962 [2024-06-08 00:56:54.964747] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020e0 is same with the state(5) to be set 00:31:36.962 [last message repeated for tqpair=0x13020e0 until 2024-06-08 00:56:54.965059]
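The burst of tcp.c:1602 errors above is the target side tearing down the qpairs that were connected through the just-removed 4420 listener; from bdevperf's point of view that path simply dies and I/O resumes on the 4421 path. To watch the registered paths from the initiator side during such a transition, the controller list can be polled over the bdevperf RPC socket (a sketch only; the JSON layout of the output varies between SPDK releases):

  # list the NVMe0 controller and the transport addresses it knows about
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0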
00:31:36.963 00:56:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:40.265 00:56:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:40.265 00:31:40.265 00:56:58 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:40.265 [2024-06-08 00:56:58.446199] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302d40 is same with the state(5) to be set 00:31:40.265 [last message repeated for tqpair=0x1302d40 until 2024-06-08 00:56:58.446333]
00:31:40.266 00:56:58 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:43.567 00:57:01 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.567 [2024-06-08 00:57:01.621421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.567 00:57:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:44.511 00:57:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:44.511 [2024-06-08 00:57:02.782919] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1159b80 is same with the state(5) to be set 00:31:44.511 [last message repeated for tqpair=0x1159b80 until 2024-06-08 00:57:02.783487]
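Taken together, steps @43 through @57 of failover.sh walk the active path around all three portals while the 15-second verify workload keeps running: drop 4420 (I/O fails over to 4421), attach 4422 as a third path and drop 4421 (I/O fails over to 4422), then restore the 4420 listener and drop 4422 (I/O fails back to 4420). Stripped of the xtrace noise, the sequence is the following (rpc and nqn are shorthand variables; ports and sleeps as logged above, giving the reconnect logic time to settle):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop primary; fail over to 4421
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn                        # register a third path
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring the primary portal back
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420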
00:31:44.773 00:57:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 602427 00:31:51.382 0 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 602239 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 602239 ']' 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 602239 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:51.382 00:57:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 602239 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 602239' 00:31:51.382 killing process with pid 602239 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 602239 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 602239 00:31:51.382 00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:51.382 [2024-06-08 00:56:52.490046] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:51.382 [2024-06-08 00:56:52.490103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602239 ] 00:31:51.382 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.382 [2024-06-08 00:56:52.548983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.382 [2024-06-08 00:56:52.613210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.382 Running I/O for 15 seconds...
00:31:51.382 [2024-06-08 00:56:54.966318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.382 [2024-06-08 00:56:54.966352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION" completion pairs repeated for lba:96416 through lba:97416 (len:8, varying cid) ...]
00:31:51.385 [2024-06-08 00:56:54.968465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:51.385 [2024-06-08 00:56:54.968471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:51.385 [2024-06-08 00:56:54.968478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0
00:31:51.385 [2024-06-08 00:56:54.968487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.385 [2024-06-08 00:56:54.968522] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a76d80 was disconnected and freed. reset controller.
00:31:51.385 [2024-06-08 00:56:54.968532] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:51.385 [2024-06-08 00:56:54.968551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.385 [2024-06-08 00:56:54.968559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.385 [2024-06-08 00:56:54.968567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.385 [2024-06-08 00:56:54.968574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.385 [2024-06-08 00:56:54.968582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.385 [2024-06-08 00:56:54.968589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.385 [2024-06-08 00:56:54.968597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.385 [2024-06-08 00:56:54.968604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.385 [2024-06-08 00:56:54.968611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:51.385 [2024-06-08 00:56:54.972154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:51.385 [2024-06-08 00:56:54.972178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a57cf0 (9): Bad file descriptor
00:31:51.385 [2024-06-08 00:56:55.017298] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:51.385 [2024-06-08 00:56:58.447089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:51.385 [2024-06-08 00:56:58.447125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION" completion pairs repeated for lba:19744 through lba:19920 (len:8, varying cid) ...]
00:31:51.386 [2024-06-08 00:56:58.447533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:51.386 [2024-06-08 00:56:58.447540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / "ABORTED - SQ DELETION" completion pairs repeated for lba:20136 through lba:20168 (len:8, varying cid) ...]
00:31:51.386 [2024-06-08 00:56:58.447630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:74 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.386 [2024-06-08 00:56:58.447766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.386 [2024-06-08 00:56:58.447775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20256 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 
00:56:58.447961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.447985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.447994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.387 [2024-06-08 00:56:58.448155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.387 [2024-06-08 00:56:58.448387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.387 [2024-06-08 00:56:58.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 
[2024-06-08 00:56:58.448626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.388 [2024-06-08 00:56:58.448681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:109 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.448988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.448997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.388 [2024-06-08 00:56:58.449004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.388 [2024-06-08 00:56:58.449012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20704 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.389 [2024-06-08 00:56:58.449199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.389 [2024-06-08 00:56:58.449227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.389 [2024-06-08 00:56:58.449233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20752 len:8 PRP1 0x0 PRP2 0x0 00:31:51.389 [2024-06-08 00:56:58.449243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449277] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a78e00 was disconnected and freed. reset controller. 
00:31:51.389 [2024-06-08 00:56:58.449287] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:51.389 [2024-06-08 00:56:58.449304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.389 [2024-06-08 00:56:58.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.389 [2024-06-08 00:56:58.449328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.389 [2024-06-08 00:56:58.449343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:51.389 [2024-06-08 00:56:58.449357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.389 [2024-06-08 00:56:58.449365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:51.389 [2024-06-08 00:56:58.449388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a57cf0 (9): Bad file descriptor 00:31:51.389 [2024-06-08 00:56:58.452918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:51.389 [2024-06-08 00:56:58.492606] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
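Every abort record in the floods above and below follows the same two-line pattern: an nvme_io_qpair_print_command NOTICE describing the in-flight command, immediately followed by an spdk_nvme_print_completion NOTICE carrying the ABORTED - SQ DELETION (00/08) status. The following standalone sketch shows how such a flood can be condensed into the one-line summaries used in the elided spans; it is hypothetical (not part of SPDK or this test suite) and assumes only the record layout visible in this log, one record per input line.

#!/usr/bin/env python3
# Condense SPDK nvme_qpair abort floods into per-opcode summaries.
# Hypothetical helper; the regexes assume only the record format
# visible in this log (print_command / print_completion NOTICE pairs).
import re
import sys
from collections import defaultdict

# One command record, e.g.:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
#   cid:9 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
CMD = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+) len:\d+"
)
# The matching completion, e.g. "ABORTED - SQ DELETION (00/08)".
CPL = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.*?\(\d{2}/\d{2}\))"
)

def condense(stream):
    """Pair each command with its completion; report LBA range per group."""
    groups = defaultdict(list)  # (opcode, status) -> [lba, ...]
    pending = None              # last command awaiting its completion
    for line in stream:
        m = CMD.search(line)
        if m:
            pending = (m["op"], int(m["lba"]))
        m = CPL.search(line)
        if m and pending:
            op, lba = pending
            groups[(op, m["status"])].append(lba)
            pending = None
    for (op, status), lbas in sorted(groups.items()):
        print(f"{op:5} x{len(lbas):4}  lba {min(lbas)}-{max(lbas)}  {status}")

if __name__ == "__main__":
    condense(sys.stdin)

Piping a captured section of this console log through the sketch would print one line per (opcode, status) group, e.g. a READ and a WRITE line each tagged ABORTED - SQ DELETION (00/08), matching the elided-span summaries above.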
00:31:51.389 [2024-06-08 00:57:02.785312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.389 [2024-06-08 00:57:02.785349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical NOTICE pairs elided: after the failover the next batch of outstanding I/O on sqid:1 is aborted the same way - interleaved READs covering lba 36344-36744 and WRITEs covering lba 36808-36944 (len:8 each), all ABORTED - SQ DELETION (00/08) ...] 
00:31:51.392 [2024-06-08 00:57:02.786509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 
[2024-06-08 00:57:02.786845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.392 [2024-06-08 00:57:02.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.392 [2024-06-08 00:57:02.786876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.393 [2024-06-08 00:57:02.786883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.786894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.393 [2024-06-08 00:57:02.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.786922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.786929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37152 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.786937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.786948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.786953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.786959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37160 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.786967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.786974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.786980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.786986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37168 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37176 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37184 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37192 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37200 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37208 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37216 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37224 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:51.393 [2024-06-08 00:57:02.787184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37232 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37240 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37248 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37256 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37264 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37272 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787342] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37280 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37288 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37296 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37304 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37312 len:8 PRP1 0x0 PRP2 0x0 00:31:51.393 [2024-06-08 00:57:02.787468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.393 [2024-06-08 00:57:02.787475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.393 [2024-06-08 00:57:02.787481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.393 [2024-06-08 00:57:02.787487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37320 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.787494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.787501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:31:51.394 [2024-06-08 00:57:02.787507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37328 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37336 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37344 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37352 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36752 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36760 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800790] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36768 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36776 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36784 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36792 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:51.394 [2024-06-08 00:57:02.800899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:51.394 [2024-06-08 00:57:02.800905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36800 len:8 PRP1 0x0 PRP2 0x0 00:31:51.394 [2024-06-08 00:57:02.800911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.394 [2024-06-08 00:57:02.800951] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c21aa0 was disconnected and freed. reset controller. 
00:31:51.394 [2024-06-08 00:57:02.800960] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:51.394 [2024-06-08 00:57:02.800987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.394 [2024-06-08 00:57:02.800996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.394 [2024-06-08 00:57:02.801005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.394 [2024-06-08 00:57:02.801012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.394 [2024-06-08 00:57:02.801020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.394 [2024-06-08 00:57:02.801027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.394 [2024-06-08 00:57:02.801035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:51.394 [2024-06-08 00:57:02.801042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:51.394 [2024-06-08 00:57:02.801049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:51.394 [2024-06-08 00:57:02.801088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a57cf0 (9): Bad file descriptor
00:31:51.394 [2024-06-08 00:57:02.804644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:51.394 [2024-06-08 00:57:02.842974] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:51.394
00:31:51.394 Latency(us)
00:31:51.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:51.394 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:51.394 Verification LBA range: start 0x0 length 0x4000
00:31:51.394 NVMe0n1 : 15.00 11333.42 44.27 272.24 0.00 11001.46 815.79 23156.05
00:31:51.394 ===================================================================================================================
00:31:51.394 Total : 11333.42 44.27 272.24 0.00 11001.46 815.79 23156.05
00:31:51.394 Received shutdown signal, test time was about 15.000000 seconds
00:31:51.394
00:31:51.394 Latency(us)
00:31:51.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:51.394 ===================================================================================================================
00:31:51.394 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:51.394
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=605568
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 605568 /var/tmp/bdevperf.sock
00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 605568 ']'
00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
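The grep -c / count=3 pair above is the pass criterion for the timed run: the failover events earlier in the log must have produced exactly three 'Resetting controller successful' lines in the captured output. A minimal bash sketch of that assertion, assuming the bdevperf output was saved to try.txt as the trace shows ($testdir is an illustrative stand-in, not a name from the log):

# count successful controller resets in the captured log; path is illustrative
count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
if ((count != 3)); then
    echo "expected 3 successful resets, got $count" >&2
    exit 1
fi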
00:31:51.394 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:31:51.394 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:51.964 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:31:51.964 00:57:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:31:51.964 00:57:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-06-08 00:57:10.121345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:51.964 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-06-08 00:57:10.281749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:31:52.224 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:52.485 NVMe0n1
00:31:52.485 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:52.745
00:31:52.745 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:53.006
00:31:53.006 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:31:53.267 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:53.267 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:31:56.565 00:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:31:56.565 00:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:56.565 00:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=606967
00:31:56.565 00:57:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 606967
00:31:57.507 0
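The commands just traced are the failover setup in miniature: the target gains listeners on ports 4421 and 4422 in addition to 4420, the bdevperf initiator attaches the same subsystem once per port under the single controller name NVMe0, and detaching the active 4420 path forces bdev_nvme to fail over to the next registered address. A hedged sketch of that sequence; $rpc is shorthand for the full scripts/rpc.py path in the trace, and the loop compresses the three attach lines rather than quoting the script verbatim:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side: two extra TCP listeners for the same subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# initiator side, against the bdevperf RPC socket: one bdev name, three paths;
# only the first attach prints a bdev (NVMe0n1), the rest register as failover trids
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# dropping the active path is what triggers the failover notices in the log
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1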
00:31:57.507 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:57.507 [2024-06-08 00:57:09.206223] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:31:57.507 [2024-06-08 00:57:09.206280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605568 ]
00:31:57.507 EAL: No free 2048 kB hugepages reported on node 1
00:31:57.507 [2024-06-08 00:57:09.265292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:57.507 [2024-06-08 00:57:09.328038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:31:57.507 [2024-06-08 00:57:11.442991] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:57.507 [2024-06-08 00:57:11.443038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:57.507 [2024-06-08 00:57:11.443049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:57.507 [2024-06-08 00:57:11.443058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:57.507 [2024-06-08 00:57:11.443065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:57.507 [2024-06-08 00:57:11.443073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:57.507 [2024-06-08 00:57:11.443080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:57.507 [2024-06-08 00:57:11.443088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:57.507 [2024-06-08 00:57:11.443095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:57.507 [2024-06-08 00:57:11.443102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:57.507 [2024-06-08 00:57:11.443127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:57.507 [2024-06-08 00:57:11.443140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ccf0 (9): Bad file descriptor
00:31:57.507 [2024-06-08 00:57:11.490960] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:57.507 Running I/O for 1 seconds...
00:31:57.507 00:31:57.507 Latency(us) 00:31:57.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.507 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:57.507 Verification LBA range: start 0x0 length 0x4000 00:31:57.507 NVMe0n1 : 1.01 11529.12 45.04 0.00 0.00 11045.48 2621.44 11359.57 00:31:57.507 =================================================================================================================== 00:31:57.507 Total : 11529.12 45.04 0.00 0.00 11045.48 2621.44 11359.57 00:31:57.507 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.507 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:57.768 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:58.029 00:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:58.029 00:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:58.029 00:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:58.290 00:57:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 605568 ']' 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 605568' 00:32:01.593 killing process with pid 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 605568 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:01.593 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.853 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:01.853 00:57:19 
00:32:01.853 00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:57:19 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:57:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 601752 ']'
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 601752
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 601752 ']'
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 601752
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 601752
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 601752'
killing process with pid 601752
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 601752
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 601752
00:32:02.114 00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:57:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:57:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:04.029 00:57:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:04.029
00:32:04.029 real 0m39.120s
00:32:04.029 user 2m0.069s
00:32:04.029 sys 0m8.305s
00:32:04.029 00:57:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable
00:32:04.029 00:57:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
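The rmmod lines above are the verbose output of modprobe -r inside nvmfcleanup; the {1..20} loop exists because nvme-tcp can stay pinned for a moment while the last qpairs drain after the target exits. A sketch of that retry shape as the trace shows it (the per-attempt delay is an assumption; the trace only shows the loop and the two modprobe calls):

set +e                                # unloading may fail while connections drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break  # prints the rmmod lines seen in the log
    sleep 1                           # assumed back-off between attempts; not shown in the trace
done
modprobe -v -r nvme-fabrics
set -e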
00:32:04.029 ************************************
00:32:04.029 END TEST nvmf_failover
00:32:04.029 ************************************
00:32:04.029 00:57:22 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:32:04.291 00:57:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:32:04.291 00:57:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:32:04.291 00:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:04.291 ************************************
00:32:04.291 START TEST nvmf_host_discovery
00:32:04.291 ************************************
00:32:04.291 00:57:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:32:04.291 * Looking for test storage...
00:32:04.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:04.291 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
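As the sourcing trace shows, the host identity comes straight from nvme-cli: gen-hostnqn prints a UUID-based NQN, and the UUID suffix doubles as the host ID. A minimal sketch of that derivation, assuming nvme-cli is installed; the parameter expansion is one way to split the string, not necessarily common.sh's exact line:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}        # the UUID after the last colon becomes the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")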
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated entries elided]:/var/lib/snapd/snap/bin
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated entries elided]:/var/lib/snapd/snap/bin
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated entries elided]:/var/lib/snapd/snap/bin
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
DISCOVERY_PORT=8009 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:04.292 00:57:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:12.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:12.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:12.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:12.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.440 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:12.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:32:12.440 00:32:12.440 --- 10.0.0.2 ping statistics --- 00:32:12.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.440 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:32:12.441 00:32:12.441 --- 10.0.0.1 ping statistics --- 00:32:12.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.441 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=612160 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 612160 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- 
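The nvmf_tcp_init steps above split the two-port e810 NIC into target and initiator roles: cvl_0_0 moves into a private network namespace and takes the target address, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> root ns

Both pings report 0% loss, so the data path is verified before any NVMe-oF traffic starts.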
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 612160 ']' 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:12.441 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 [2024-06-08 00:57:29.795782] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:12.441 [2024-06-08 00:57:29.795853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.441 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.441 [2024-06-08 00:57:29.882190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.441 [2024-06-08 00:57:29.975087] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.441 [2024-06-08 00:57:29.975144] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.441 [2024-06-08 00:57:29.975152] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.441 [2024-06-08 00:57:29.975159] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.441 [2024-06-08 00:57:29.975165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
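nvmfappstart launches the target inside the namespace (NVMF_APP was prefixed with the ip netns exec command assembled above) and waitforlisten blocks until the RPC socket answers. A sketch of that pattern, assuming the default socket /var/tmp/spdk.sock and paths relative to the spdk checkout; the real helper's retry logic lives in autotest_common.sh, with up to 100 attempts as the max_retries=100 in the trace suggests:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # rpc_get_methods only succeeds once the app is up and listening on the socket.
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done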
00:32:12.441 [2024-06-08 00:57:29.975191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 [2024-06-08 00:57:30.622976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 [2024-06-08 00:57:30.635244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 null0 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 null1 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=612320 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 612320 /tmp/host.sock 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 612320 ']' 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:12.441 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:12.441 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.703 [2024-06-08 00:57:30.727981] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:12.703 [2024-06-08 00:57:30.728040] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612320 ] 00:32:12.703 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.703 [2024-06-08 00:57:30.791429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.703 [2024-06-08 00:57:30.865578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 
-- # xtrace_disable 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.275 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
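In plain rpc.py terms, the target-side provisioning so far is: create the TCP transport, expose a discovery listener on the well-known NQN, create two null bdevs, create subsystem cnode0, and give it its first namespace. A sketch against the target's default RPC socket (the -o/-u transport flags are carried over exactly as recorded above):

  RPC='./scripts/rpc.py'                                  # target socket /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # Discovery service: well-known NQN, conventional NVMe/TCP discovery port 8009.
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512                    # 1000 MB, 512-byte blocks
  $RPC bdev_null_create null1 1000 512
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0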
00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.796 [2024-06-08 00:57:31.858318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- 
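On the host side (the second app on /tmp/host.sock), discovery was started with bdev_nvme_start_discovery, and the assertions poll three small helpers. Reconstructed from the trace, with rpc_cmd written out as scripts/rpc.py:

  HOST_RPC='./scripts/rpc.py -s /tmp/host.sock'
  $HOST_RPC log_set_flag bdev_nvme          # source of the bdev_nvme INFO lines below
  # Attach to the discovery service; controllers get the 'nvme' name prefix.
  $HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  get_subsystem_names() {   # controller names on the host, space-separated
      $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs those controllers expose (nvme0n1, nvme0n2, ...)
      $HOST_RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # ports of every path to controller $1; used further down
      $HOST_RPC bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

The empty-string checks above pass because cnode0 had no data listener until the add_listener on 4420 just recorded; the host also still needs nqn.2021-12.io.spdk:test whitelisted via nvmf_subsystem_add_host before it may attach.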
host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:13.796 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.796 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.797 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.057 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:32:14.057 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:32:14.317 [2024-06-08 00:57:32.520568] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.317 [2024-06-08 00:57:32.520594] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.317 [2024-06-08 00:57:32.520608] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.578 [2024-06-08 00:57:32.647028] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:14.839 [2024-06-08 00:57:32.867254] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:14.839 [2024-06-08 00:57:32.867278] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:14.839 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.100 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery 
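Two generic autotest_common.sh patterns carry all of these assertions: waitforcondition retries an arbitrary condition string, and get_notification_count counts notify-bus events past a cursor. Sketches consistent with the behavior in this trace (max=10 attempts one second apart; notify_id advancing 0 -> 1 -> 2; the counted events are presumably bdev_register notifications for the newly attached namespaces):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }
  get_notification_count() {
      # Count events with id greater than the current cursor, then advance it.
      notification_count=$($HOST_RPC notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }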
-- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.364 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.365 [2024-06-08 00:57:33.450436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:15.365 [2024-06-08 00:57:33.450602] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:15.365 [2024-06-08 00:57:33.450632] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- 
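Multipath comes for free from discovery: adding a second listener to cnode0 makes the discovery subsystem raise an AER on the host's persistent discovery connection ('got aer' above), the host re-reads the discovery log page, and 10.0.0.2:4421 shows up as a new path on the existing nvme0 controller rather than as a new controller. The target-side trigger, as recorded above:

  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  # Afterwards get_subsystem_paths nvme0 is expected to report: 4420 4421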
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.365 [2024-06-08 00:57:33.539218] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.365 [2024-06-08 00:57:33.596987] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:15.365 [2024-06-08 00:57:33.597009] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:15.365 [2024-06-08 00:57:33.597015] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:15.365 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:16.382 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.644 [2024-06-08 00:57:34.718351] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:16.644 [2024-06-08 00:57:34.718375] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.644 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:16.644 [2024-06-08 00:57:34.725587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.644 [2024-06-08 00:57:34.725606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.644 [2024-06-08 00:57:34.725615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.645 [2024-06-08 00:57:34.725622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.645 [2024-06-08 00:57:34.725630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.645 [2024-06-08 00:57:34.725637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.645 [2024-06-08 00:57:34.725645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.645 [2024-06-08 00:57:34.725652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.645 [2024-06-08 00:57:34.725659] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.645 [2024-06-08 00:57:34.735600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.645 [2024-06-08 00:57:34.745641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.746053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.746067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.746075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.746087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.746104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.746111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.746119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.746131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
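Editor's note: the xtrace lines above (autotest_common.sh@913 through @919) spell out the generic polling helper the discovery test leans on: stash the condition string, retry up to ten times with a one-second sleep, and eval the condition each pass. A minimal bash sketch reconstructed from that trace follows; the timeout return value is an assumption, since the trace only ever shows the success path (@917 return 0).

    # Sketch of waitforcondition, reconstructed from the xtrace output above.
    # The real autotest_common.sh body may differ in detail.
    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0       # condition met, as at @917 above
            fi
            sleep 1            # the @919 back-off between polls
        done
        return 1               # assumption: non-zero status once max is exhausted
    }

    # Usage, as seen in this log:
    # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'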
00:32:16.645 [2024-06-08 00:57:34.755695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.756070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.756082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.756090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.756100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.756110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.756117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.756124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.756134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.645 [2024-06-08 00:57:34.765749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.765966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.765980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.765987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.765998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.766009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.766016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.766023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.766034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
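Editor's note on the errno = 111 storm above: host/discovery.sh@127 removed the 4420 listener, so every host-side reconnect to 10.0.0.2:4420 is now refused (errno 111, ECONNREFUSED) and bdev_nvme keeps cycling through reset attempts until discovery prunes the dead path. A hedged sketch of the teardown and its inverse; the add_listener line is an assumption for illustration, this run never restores 4420:

    # Traced above at host/discovery.sh@127:
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Reconnects to 10.0.0.2:4420 now fail with ECONNREFUSED. The standard
    # inverse RPC, were a test to bring the path back, would be:
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420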
00:32:16.645 [2024-06-08 00:57:34.775807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.776192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.776204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.776211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.776222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.776232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.776238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.776245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.776256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.645 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.645 [2024-06-08 00:57:34.785862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.786148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.786159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.786166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.786177] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.786187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.786193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.786200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.786211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.645 [2024-06-08 00:57:34.795913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.796203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.796215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.796226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.645 [2024-06-08 00:57:34.796238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.645 [2024-06-08 00:57:34.796254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.645 [2024-06-08 00:57:34.796261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.645 [2024-06-08 00:57:34.796268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.645 [2024-06-08 00:57:34.796278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
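Editor's note: the two query helpers traced at host/discovery.sh@55 and @63 reduce RPC output to flat, comparable strings for the wait conditions. Sketches reconstructed from the pipelines visible in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py):

    # host/discovery.sh@55: bdev names as one sorted line, e.g. "nvme0n1 nvme0n2"
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # host/discovery.sh@63: listening service IDs for one controller,
    # numerically sorted, e.g. "4420 4421"
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

With the 4420 listener gone, get_subsystem_paths nvme0 collapses to "4421", which is exactly what the @131 wait condition below checks against $NVMF_SECOND_PORT.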
00:32:16.645 [2024-06-08 00:57:34.805967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.645 [2024-06-08 00:57:34.806354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.645 [2024-06-08 00:57:34.806366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19361c0 with addr=10.0.0.2, port=4420 00:32:16.645 [2024-06-08 00:57:34.806373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19361c0 is same with the state(5) to be set 00:32:16.646 [2024-06-08 00:57:34.806384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19361c0 (9): Bad file descriptor 00:32:16.646 [2024-06-08 00:57:34.806422] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:16.646 [2024-06-08 00:57:34.806438] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:16.646 [2024-06-08 00:57:34.806456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.646 [2024-06-08 00:57:34.806464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:16.646 [2024-06-08 00:57:34.806471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.646 [2024-06-08 00:57:34.806484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.646 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.907 00:57:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.907 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.908 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 [2024-06-08 00:57:36.152597] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:18.291 [2024-06-08 00:57:36.152615] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:18.291 [2024-06-08 00:57:36.152628] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:18.291 [2024-06-08 00:57:36.241930] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:18.291 [2024-06-08 00:57:36.345034] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:18.291 [2024-06-08 00:57:36.345064] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 request: 00:32:18.291 { 00:32:18.291 "name": "nvme", 00:32:18.291 "trtype": "tcp", 00:32:18.291 "traddr": "10.0.0.2", 00:32:18.291 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.291 "adrfam": "ipv4", 00:32:18.291 "trsvcid": "8009", 00:32:18.291 "wait_for_attach": true, 00:32:18.291 "method": "bdev_nvme_start_discovery", 00:32:18.291 "req_id": 1 00:32:18.291 } 00:32:18.291 Got JSON-RPC error response 00:32:18.291 response: 00:32:18.291 { 00:32:18.291 "code": -17, 00:32:18.291 "message": "File exists" 00:32:18.291 } 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 request: 00:32:18.291 { 00:32:18.291 "name": "nvme_second", 00:32:18.291 "trtype": "tcp", 00:32:18.291 "traddr": "10.0.0.2", 00:32:18.291 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.291 "adrfam": "ipv4", 00:32:18.291 "trsvcid": "8009", 00:32:18.291 "wait_for_attach": true, 00:32:18.291 "method": "bdev_nvme_start_discovery", 00:32:18.291 "req_id": 1 00:32:18.291 } 00:32:18.291 Got JSON-RPC error response 00:32:18.291 response: 00:32:18.291 { 00:32:18.291 "code": -17, 00:32:18.291 "message": "File exists" 00:32:18.291 } 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.291 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.551 00:57:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.492 [2024-06-08 00:57:37.612701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.492 [2024-06-08 00:57:37.612741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1932230 with addr=10.0.0.2, port=8010 00:32:19.492 [2024-06-08 00:57:37.612755] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:19.492 [2024-06-08 00:57:37.612763] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:19.492 [2024-06-08 00:57:37.612770] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:20.434 [2024-06-08 00:57:38.614998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.434 [2024-06-08 00:57:38.615021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1932230 with addr=10.0.0.2, port=8010 00:32:20.434 [2024-06-08 00:57:38.615032] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:20.434 [2024-06-08 00:57:38.615038] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:20.434 [2024-06-08 00:57:38.615045] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:21.377 [2024-06-08 00:57:39.616903] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:21.377 request: 00:32:21.377 { 00:32:21.377 "name": "nvme_second", 00:32:21.377 "trtype": "tcp", 00:32:21.377 "traddr": "10.0.0.2", 00:32:21.377 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:21.377 "adrfam": "ipv4", 00:32:21.377 "trsvcid": "8010", 00:32:21.377 "attach_timeout_ms": 3000, 00:32:21.377 "method": "bdev_nvme_start_discovery", 00:32:21.377 "req_id": 1 00:32:21.377 } 00:32:21.377 Got 
JSON-RPC error response 00:32:21.377 response: 00:32:21.377 { 00:32:21.377 "code": -110, 00:32:21.377 "message": "Connection timed out" 00:32:21.377 } 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:21.377 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 612320 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.638 rmmod nvme_tcp 00:32:21.638 rmmod nvme_fabrics 00:32:21.638 rmmod nvme_keyring 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 612160 ']' 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 612160 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 612160 ']' 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 612160 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 612160 00:32:21.638 00:57:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 612160' 00:32:21.638 killing process with pid 612160 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 612160 00:32:21.638 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 612160 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.900 00:57:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.817 00:57:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.817 00:32:23.817 real 0m19.612s 00:32:23.817 user 0m22.975s 00:32:23.817 sys 0m6.749s 00:32:23.817 00:57:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:23.817 00:57:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.817 ************************************ 00:32:23.817 END TEST nvmf_host_discovery 00:32:23.817 ************************************ 00:32:23.817 00:57:42 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:23.817 00:57:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:23.817 00:57:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:23.817 00:57:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.817 ************************************ 00:32:23.817 START TEST nvmf_host_multipath_status 00:32:23.817 ************************************ 00:32:23.817 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:24.079 * Looking for test storage... 
00:32:24.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:24.079 00:57:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:24.079 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.080 00:57:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.672 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:30.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:30.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
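Editor's note: the nvmf/common.sh trace above is the NIC-selection table. Candidate PCI device IDs are appended per family from a pci_bus_cache map, then SPDK_TEST_NVMF_NICS=e810 narrows pci_devs to the Intel E810 entries, which is why both 0000:4b:00.x ports match 0x159b below. A condensed sketch with the cache stubbed; in the real script it is populated from a PCI bus scan:

    # "vendor:device" -> space-separated PCI addresses; stubbed here,
    # filled by the real script's bus scan.
    declare -A pci_bus_cache
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810-XXV (the two ports found in this run)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                     # SPDK_TEST_NVMF_NICS=e810
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done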
00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.933 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:30.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:30.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.934 00:57:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.934 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.934 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.934 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.934 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.934 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:31.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:32:31.195 00:32:31.195 --- 10.0.0.2 ping statistics --- 00:32:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.195 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:32:31.195 00:32:31.195 --- 10.0.0.1 ping statistics --- 00:32:31.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.195 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=618451 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 618451 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 618451 ']' 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:31.195 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:31.195 [2024-06-08 00:57:49.363205] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:32:31.195 [2024-06-08 00:57:49.363268] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.195 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.195 [2024-06-08 00:57:49.433678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.457 [2024-06-08 00:57:49.507769] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.457 [2024-06-08 00:57:49.507805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.457 [2024-06-08 00:57:49.507816] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.457 [2024-06-08 00:57:49.507823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.457 [2024-06-08 00:57:49.507828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.457 [2024-06-08 00:57:49.507968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.457 [2024-06-08 00:57:49.507968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=618451 00:32:32.029 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.029 [2024-06-08 00:57:50.303660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.289 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:32.289 Malloc0 00:32:32.289 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:32.549 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.549 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.810 [2024-06-08 00:57:50.926091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.810 00:57:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.810 [2024-06-08 00:57:51.078457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=618814 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 618814 /var/tmp/bdevperf.sock 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 618814 ']' 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:33.071 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:33.642 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:33.642 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:32:33.642 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:33.903 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:34.164 Nvme0n1 00:32:34.435 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:34.741 Nvme0n1 00:32:34.741 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:34.741 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:36.652 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:36.652 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:36.913 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:36.913 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:37.855 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:37.855 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:37.855 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.855 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:38.116 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.116 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:38.116 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.116 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.377 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:38.637 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.637 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:38.637 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.637 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:32:38.898 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.898 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:38.898 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.898 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:38.898 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.898 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:38.898 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:39.159 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:39.420 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:40.362 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:40.362 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:40.362 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.362 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.623 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:40.884 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:32:40.884 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:40.884 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.884 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:40.884 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.884 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.145 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:41.406 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.406 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:41.406 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:41.406 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:41.667 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:42.607 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:42.607 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:42.607 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.607 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:42.867 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.867 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:32:42.867 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.867 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.128 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.388 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:43.646 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.646 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:43.646 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:43.905 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:43.905 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.287 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:45.547 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.547 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:45.547 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.547 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:45.806 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.806 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:45.807 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.807 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:45.807 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
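Each check_status round in this trace decomposes into six port_status probes: the script calls bdev_nvme_get_io_paths over the bdevperf RPC socket, filters the reply with jq on transport.trsvcid, and string-compares the result (the escaped [[ true == \t\r\u\e ]] lines). A condensed reconstruction of that probe, inferred from the trace rather than copied from multipath_status.sh (rpc.py path shortened to the SPDK checkout):

    # Probe one attribute (current/connected/accessible) of the I/O path
    # behind a listener port; succeed only if it matches the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3 got
        got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }
    port_status 4421 accessible false   # holds once 4421's ANA state is inaccessible

In the round above (set_ANA_state non_optimized inaccessible), 4420 keeps current, connected, and accessible true, while 4421 stays connected but is not current; the accessible probe for 4421, which the trace completes next, returns false.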
00:32:45.807 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:45.807 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.807 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:46.067 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.067 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:46.067 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:46.327 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:46.327 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:47.268 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:47.268 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:47.268 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.268 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:47.529 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.529 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:47.529 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:47.529 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.790 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.790 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:47.790 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.790 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:47.790 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.790 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:32:47.790 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.790 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:48.051 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.051 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:48.051 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.051 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:48.311 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:48.572 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:48.572 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:49.957 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:49.957 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:49.957 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.957 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.957 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.958 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:50.218 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.218 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:50.218 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.218 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.479 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:50.783 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.783 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:50.783 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:50.784 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:51.057 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:51.318 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:52.261 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:52.261 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:52.261 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.261 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.522 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:52.782 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.782 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:52.783 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.783 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:53.043 00:58:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:53.043 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.304 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.304 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:53.304 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:53.564 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:53.564 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:54.504 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:54.504 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.765 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.024 00:58:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:55.024 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.283 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.283 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:55.283 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.283 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:55.543 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:55.804 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:56.065 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:57.006 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:57.006 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:57.006 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.006 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:57.267 00:58:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.267 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.528 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:57.788 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.788 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:57.788 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.788 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.049 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.049 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:58.049 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.049 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:58.309 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:59.250 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:59.250 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:59.250 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.250 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.510 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.510 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:59.510 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.510 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.771 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:00.031 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.031 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:00.031 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.031 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 618814 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 618814 ']' 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 618814 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 618814 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 618814' 00:33:00.292 killing process with pid 618814 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 618814 00:33:00.292 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 618814 00:33:00.574 Connection closed with partial response: 00:33:00.574 00:33:00.574 00:33:00.574 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 618814 00:33:00.574 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:00.574 [2024-06-08 00:57:51.139153] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:33:00.574 [2024-06-08 00:57:51.139209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618814 ] 00:33:00.574 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.574 [2024-06-08 00:57:51.188945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.574 [2024-06-08 00:57:51.241092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.574 Running I/O for 90 seconds... 
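The shell trace above is multipath_status.sh exercising its helpers: each port_status call asks bdevperf over its RPC socket for the io_paths table and pulls one field (current/connected/accessible) for one listener with jq; set_ANA_state flips both listeners' ANA states on the target and sleeps a second so the host can pick the change up; and check_status is just the six port_status assertions bundled, in the order of its six boolean arguments. A minimal bash sketch of that pattern, reconstructed from the commands visible in the trace (paths shortened; the actual helper bodies in test/nvmf/host/multipath_status.sh may differ in detail):

#!/usr/bin/env bash
# Sketch only: reconstructed from the rpc.py/jq calls logged above.
rpc_py="scripts/rpc.py"
bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"   # bdevperf's RPC socket

# port_status <trsvcid> <field> <expected>: assert one field of the io_path
# that runs through the given listener port.
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$($bperf_rpc bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: change both listeners'
# ANA states on the target, then give the host a moment to notice.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    sleep 1
}

# The step logged above: make 4421 inaccessible, expect I/O to ride 4420.
set_ANA_state non_optimized inaccessible
port_status 4420 current true    && port_status 4421 current false
port_status 4420 connected true  && port_status 4421 connected true
port_status 4420 accessible true && port_status 4421 accessible false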
00:33:00.574 [2024-06-08 00:58:04.328663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.574 [2024-06-08 00:58:04.328697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several hundred further NOTICE pairs elided: every WRITE/READ submitted during the 90-second run (qid:1, lba 58184-59200) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); only cid, lba and sqhd vary ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.579 [2024-06-08 00:58:04.334655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:33:00.579 [2024-06-08 00:58:04.334709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.334869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.334880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.345984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.345989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:00.579 [2024-06-08 00:58:04.346331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.579 [2024-06-08 00:58:04.346337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 
[2024-06-08 00:58:04.346433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 
00:58:04.346880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.580 [2024-06-08 00:58:04.346929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:00.580 [2024-06-08 00:58:04.346939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.346944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.346954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.346958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.346969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.346974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.346983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.346988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.346998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 
sqhd:0020 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.581 [2024-06-08 00:58:04.347477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.581 [2024-06-08 00:58:04.347537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:00.581 [2024-06-08 00:58:04.347547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.582 [2024-06-08 00:58:04.347768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:00.582 [2024-06-08 00:58:04.347778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.582 [2024-06-08 00:58:04.347783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... several hundred near-identical NOTICE pairs omitted: READ and WRITE commands on sqid:1 (nsid:1, lba 58184-59200, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd cycling through 0000-007f, p:0 m:0 dnr:0 throughout ...]
00:33:00.587 [2024-06-08 00:58:04.361934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.587 [2024-06-08 00:58:04.361939] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:00.587 [2024-06-08 00:58:04.361949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.587 [2024-06-08 00:58:04.361954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.361965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.361970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.361981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.361986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.588 [2024-06-08 00:58:04.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.362904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.362909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:33:00.588 [2024-06-08 00:58:04.363354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.588 [2024-06-08 00:58:04.363360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:00.588 [2024-06-08 00:58:04.363371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.589 [2024-06-08 00:58:04.363630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.363986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.363996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.589 [2024-06-08 00:58:04.364122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.364192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.364197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:00.589 [2024-06-08 00:58:04.368944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.589 [2024-06-08 00:58:04.368949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.368959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.368965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:33:00.590 [2024-06-08 00:58:04.369473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:00.590 [2024-06-08 00:58:04.369791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.590 [2024-06-08 00:58:04.369796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 [2024-06-08 00:58:04.369902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:00.591 [2024-06-08 00:58:04.369912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.591 
00:33:00.591 [2024-06-08 00:58:04.369917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:00.591 [2024-06-08 00:58:04.369927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:00.591 [2024-06-08 00:58:04.369932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several hundred near-identical nvme_qpair NOTICE pairs elided (00:58:04.369942 through 00:58:04.376190, console timers 00:33:00.591-00:33:00.596): WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, lba 58184-59200, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:00.596 [2024-06-08 00:58:04.376323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1
lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:00.596 [2024-06-08 00:58:04.376481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.596 [2024-06-08 00:58:04.376486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:33:00.597 [2024-06-08 00:58:04.376822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.376989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.376999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 00:58:04.377902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.597 [2024-06-08 
00:58:04.377918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:00.597 [2024-06-08 00:58:04.377928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.377933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.377943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.377948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.377958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.377963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.377973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.377987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.377992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.378986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.378991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:00.598 [2024-06-08 00:58:04.379360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:33:00.598 [2024-06-08 00:58:04.379378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.598 [2024-06-08 00:58:04.379383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.379891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.379993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.599 [2024-06-08 00:58:04.379998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.599 [2024-06-08 00:58:04.380304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:00.599 [2024-06-08 00:58:04.380700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.599 [2024-06-08 00:58:04.380705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:00.599 [2024-06-08 00:58:04.380715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:00.599 [2024-06-08 00:58:04.380720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... a long run of identical nvme_qpair WRITE command/completion pairs elided: sqid:1, nsid:1, lba 58240-59200, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), dnr:0; timestamps 00:58:04.380730 through 00:58:04.384592 ...]
00:33:00.602 [2024-06-08 00:58:04.384602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:00.602 [2024-06-08 00:58:04.384607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... the remaining READ pairs (lba 58192-58232, SGL TRANSPORT DATA BLOCK) and further WRITE pairs elided, all with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion status; timestamps 00:58:04.384617 through 00:58:04.388848 ...]
00:33:00.605 [2024-06-08 00:58:04.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:00.605 [2024-06-08 00:58:04.388858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:00.605 [2024-06-08 00:58:04.388868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.388989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.388994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.605 [2024-06-08 00:58:04.389413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:00.605 [2024-06-08 00:58:04.389423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.606 [2024-06-08 00:58:04.389518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.606 [2024-06-08 00:58:04.389564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.389989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.389994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:00.606 [2024-06-08 00:58:04.390227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:33:00.606 [2024-06-08 00:58:04.390245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.606 [2024-06-08 00:58:04.390250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.390882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.390887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:00.607 [2024-06-08 00:58:04.391368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.607 [2024-06-08 00:58:04.391828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:00.607 [2024-06-08 00:58:04.391843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.391848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.391863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.391868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.391883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.391888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:33:00.608 [2024-06-08 00:58:04.392722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:00.608 [2024-06-08 00:58:04.392934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:00.608 [2024-06-08 00:58:04.392939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
00:33:00.608 [long repetitive I/O trace elided: near-identical nvme_qpair.c pairs of 243:nvme_io_qpair_print_command *NOTICE* WRITE/READ commands on qid:1 (lba 17576-18440 and 58472-58592, len:8) and 474:spdk_nvme_print_completion *NOTICE* completions, every one ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged in bursts at 00:58:04 and 00:58:16]
00:33:00.610 Received shutdown signal, test time was about 25.666027 seconds
00:33:00.610
00:33:00.610 Latency(us)
00:33:00.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:00.610 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:00.610 Verification LBA range: start 0x0 length 0x4000
00:33:00.610 Nvme0n1 : 25.67 11113.13 43.41 0.00 0.00 11499.75 317.44 3075822.93
00:33:00.610 ===================================================================================================================
00:33:00.610 Total : 11113.13 43.41 0.00 0.00 11499.75 317.44 3075822.93
00:33:00.610 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 618451 ']'
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 618451
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 618451 ']'
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 618451
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 618451
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
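The killprocess 618451 call traced above (and resuming below) is the stock autotest teardown: validate the pid, check what is actually running under it, then signal and reap it. A rough bash reconstruction of that helper, simplified from the @949-@973 trace; the sudo branch body (pgrep -P) is an assumption, not the verbatim common/autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @949: a pid is required
        kill -0 "$pid" 2>/dev/null || return 1               # @953: process must still exist
        local process_name=""
        if [ "$(uname)" = Linux ]; then                      # @954: comm lookup is Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")  # @955: reports reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then                  # @959: do not SIGTERM sudo itself
            kill -9 "$(pgrep -P "$pid")"                     # assumption: kill the sudo child instead
        else
            echo "killing process with pid $pid"             # @967
            kill "$pid"                                      # @968
        fi
        wait "$pid" || true                                  # @973: reap and tolerate the exit status
    }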
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 618451'
killing process with pid 618451
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 618451
00:33:00.870 00:58:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 618451
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:00.871 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:03.415 00:58:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:03.415
00:33:03.415 real 0m39.119s
00:33:03.415 user 1m41.176s
00:33:03.415 sys 0m10.577s
00:33:03.415 00:58:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:03.415 00:58:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:03.415 ************************************
00:33:03.415 END TEST nvmf_host_multipath_status
00:33:03.415 ************************************
00:33:03.415 00:58:21 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:03.415 00:58:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:33:03.415 00:58:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:33:03.415 00:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:03.415 ************************************
00:33:03.415 START TEST nvmf_discovery_remove_ifc
00:33:03.415 ************************************
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:03.415 * Looking for test storage...
00:33:03.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:03.415 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[toolchain PATH assignment elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended, repeatedly, ahead of the standard system PATH]
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[same toolchain PATH, elided]
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[same toolchain PATH, elided]
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [exported PATH, elided]
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.416 00:58:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:10.037 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:10.037 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:10.037 00:58:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:10.037 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:10.037 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:10.037 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:10.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:33:10.298 00:33:10.298 --- 10.0.0.2 ping statistics --- 00:33:10.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.298 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:33:10.298 00:33:10.298 --- 10.0.0.1 ping statistics --- 00:33:10.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.298 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=628318 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 628318 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 628318 ']' 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:10.298 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.298 [2024-06-08 00:58:28.523799] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
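Condensed, the nvmf_tcp_init plumbing traced just above splits the two back-to-back e810 ports between the default namespace (initiator side, cvl_0_1) and a private namespace (target side, cvl_0_0), then proves reachability both ways before the target app is launched. The essential steps, lifted from the @248-@268 trace (interface and namespace names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                                        # @248
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # @251: move the target port out
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # @254: initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target address
    ip link set cvl_0_1 up                                              # @258
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @260
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # @261
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264: admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # @267: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # @268: target -> initiator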
00:33:10.298 [2024-06-08 00:58:28.523867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.298 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.558 [2024-06-08 00:58:28.613310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.558 [2024-06-08 00:58:28.706095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.558 [2024-06-08 00:58:28.706157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.558 [2024-06-08 00:58:28.706165] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.558 [2024-06-08 00:58:28.706172] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.558 [2024-06-08 00:58:28.706178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.558 [2024-06-08 00:58:28.706206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.130 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.130 [2024-06-08 00:58:29.361965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.130 [2024-06-08 00:58:29.370191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:11.130 null0 00:33:11.130 [2024-06-08 00:58:29.402145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=628418 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 628418 /tmp/host.sock 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 628418 ']' 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:11.389 
00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:11.389 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:11.389 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.389 [2024-06-08 00:58:29.476656] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:33:11.389 [2024-06-08 00:58:29.476722] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628418 ] 00:33:11.389 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.389 [2024-06-08 00:58:29.541858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.389 [2024-06-08 00:58:29.616893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.327 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:12.327 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.328 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.266 [2024-06-08 00:58:31.372596] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:13.266 [2024-06-08 00:58:31.372622] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:13.266 [2024-06-08 00:58:31.372637] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:13.266 [2024-06-08 00:58:31.461929] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:13.527 [2024-06-08 00:58:31.686032] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:13.527 [2024-06-08 00:58:31.686079] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:13.527 [2024-06-08 00:58:31.686102] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:13.527 [2024-06-08 00:58:31.686118] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:13.527 [2024-06-08 00:58:31.686139] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.527 [2024-06-08 00:58:31.690330] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x155b820 was disconnected and freed. delete nvme_qpair. 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:13.527 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:33:13.787 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.726 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.104 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.104 00:58:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.104 00:58:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.045 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.984 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:18.922 [2024-06-08 00:58:37.126471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:18.922 [2024-06-08 00:58:37.126510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.922 [2024-06-08 00:58:37.126522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.922 [2024-06-08 00:58:37.126530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.922 [2024-06-08 00:58:37.126538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.922 [2024-06-08 00:58:37.126546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.922 [2024-06-08 00:58:37.126553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.922 [2024-06-08 00:58:37.126561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.922 [2024-06-08 00:58:37.126568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.922 [2024-06-08 00:58:37.126576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.922 [2024-06-08 00:58:37.126583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.922 [2024-06-08 00:58:37.126591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522be0 is same with the state(5) to be set 00:33:18.922 [2024-06-08 00:58:37.136490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522be0 (9): Bad file descriptor 00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.922 [2024-06-08 00:58:37.146533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:18.922 
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:33:18.922 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:33:20.307 [2024-06-08 00:58:38.194439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:33:20.307 [2024-06-08 00:58:38.194478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1522be0 with addr=10.0.0.2, port=4420
00:33:20.307 [2024-06-08 00:58:38.194489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1522be0 is same with the state(5) to be set
00:33:20.307 [2024-06-08 00:58:38.194511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522be0 (9): Bad file descriptor
00:33:20.307 [2024-06-08 00:58:38.194840] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:33:20.307 [2024-06-08 00:58:38.194858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:33:20.307 [2024-06-08 00:58:38.194870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:33:20.307 [2024-06-08 00:58:38.194879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:33:20.307 [2024-06-08 00:58:38.194894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.307 [2024-06-08 00:58:38.194903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:33:20.307 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:20.307 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:33:20.307 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:33:21.246 [2024-06-08 00:58:39.197287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.247 [2024-06-08 00:58:39.197325] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:33:21.247 [2024-06-08 00:58:39.197351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.247 [2024-06-08 00:58:39.197362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.247 [2024-06-08 00:58:39.197373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.247 [2024-06-08 00:58:39.197380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.247 [2024-06-08 00:58:39.197388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.247 [2024-06-08 00:58:39.197396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.247 [2024-06-08 00:58:39.197409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.247 [2024-06-08 00:58:39.197416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.247 [2024-06-08 00:58:39.197424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.247 [2024-06-08 00:58:39.197431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.247 [2024-06-08 00:58:39.197438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
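
Editor's note: the rpc_cmd calls seen throughout this trace are the autotest wrapper around SPDK's scripts/rpc.py; -s selects the Unix socket of the application under test (here the host-side bdev app listening on /tmp/host.sock). A minimal sketch of the wrapper as implied by its usage (the real definition in common/autotest_common.sh carries extra error handling, which is an assumption here):

    # Minimal sketch; internals assumed from usage in the trace.
    rpc_cmd() {
        "$rootdir/scripts/rpc.py" "$@"
    }
    # usage as traced above:
    #   rpc_cmd -s /tmp/host.sock bdev_get_bdevs
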
00:33:21.247 [2024-06-08 00:58:39.197992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522070 (9): Bad file descriptor
00:33:21.247 [2024-06-08 00:58:39.199003] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:33:21.247 [2024-06-08 00:58:39.199014] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:58:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:33:22.187 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:33:22.447 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:33:22.447 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:33:23.016 [2024-06-08 00:58:41.215966] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:33:23.016 [2024-06-08 00:58:41.215985] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:33:23.016 [2024-06-08 00:58:41.216000] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:33:23.276 [2024-06-08 00:58:41.346424] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:58:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:33:23.536 [2024-06-08 00:58:41.570883] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:33:23.536 [2024-06-08 00:58:41.570923] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:33:23.536 [2024-06-08 00:58:41.570944] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:33:23.536 [2024-06-08 00:58:41.570961] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:33:23.536 [2024-06-08 00:58:41.570969] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:33:23.536 [2024-06-08 00:58:41.574501] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1532740 was disconnected and freed. delete nvme_qpair.
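
Editor's note: the Discovery[10.0.0.2:8009] poller that reattaches the subsystem as nvme1 above is SPDK's bdev_nvme discovery service. The test presumably started it earlier (outside this excerpt) with something along these lines; the base-name and flag spelling below follow the RPC's documented form and should be read as an assumption:

    # Assumed start command for the discovery service visible in this trace.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009

Once the interface address is re-added, the poller re-reads the discovery log page, sees nqn.2016-06.io.spdk:cnode0 again, and creates the nvme1n1 bdev that wait_for_bdev is looking for.
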
00:33:24.476 00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 628418
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 628418 ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 628418
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 628418
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 628418'
killing process with pid 628418
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 628418
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 628418
00:33:24.737 00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20}
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0
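
Editor's note: the killprocess steps traced above (argument check, liveness probe, comm lookup, kill, wait) fold into one common helper. A sketch matching the visible xtrace; the guard details and return codes are assumptions:

    # Reconstructed from the xtrace above; exact guards are assumptions.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1        # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
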
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 628318 ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 628318
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 628318 ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 628318
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 628318
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 628318'
killing process with pid 628318
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 628318
00:58:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 628318
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:58:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:27.360 00:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:27.360
00:33:27.360 real 0m23.803s
00:33:27.360 user 0m29.229s
00:33:27.360 sys 0m6.608s
00:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:27.360 00:58:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:33:27.360 ************************************
00:33:27.360 END TEST nvmf_discovery_remove_ifc
00:33:27.360 ************************************
00:58:45 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:58:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:58:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:58:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:27.360 ************************************
00:33:27.360 START TEST nvmf_identify_kernel_target
00:33:27.360 ************************************
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
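
Editor's note: the END TEST / START TEST banners and the real/user/sys block come from the run_test wrapper visible at nvmf/nvmf.sh@103. A sketch of its shape as implied by this output (banner width and the exact timing mechanism are assumptions):

    # Assumed shape of run_test: it names, times, and brackets each test script.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
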
00:33:27.360 * Looking for test storage...
00:33:27.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable
00:58:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:33:33.948 00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
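
Editor's note: the "Found net devices under ..." lines are derived purely from sysfs; for each supported PCI function the script expands /sys/bus/pci/devices/$pci/net/* and keeps the interface names. A standalone version of the same lookup, for reference:

    # Standalone form of the sysfs lookup traced above.
    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: ${dev##*/}"
    done
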
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:34.210 00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=()
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
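
Editor's note: condensed, the nvmf_tcp_init traced above splits one physical port pair between a target namespace and the root namespace, then verifies reachability in both directions. The commands below are the ones from the trace, gathered in order for reference:

    # Gathered verbatim from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
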
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:58:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:33:37.509 Waiting for block devices as requested
00:33:37.509 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:33:37.509 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:33:37.769 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:33:37.769 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:33:37.769 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:33:38.030 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:33:38.030 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:33:38.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:33:38.290 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:33:38.290 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:33:38.290 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:33:38.551 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:33:38.551 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:33:38.551 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:33:38.811 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:33:38.811 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:33:38.811 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:33:39.072 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
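
Editor's note: the mkdir/echo/ln -s sequence above is the standard kernel nvmet configfs recipe. The xtrace hides the redirection targets, so the attribute file names below are the usual nvmet ones and should be read as assumptions consistent with this trace; the values are the ones echoed above:

    # Assumed attribute paths; values match the echoes in the trace.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model
    echo 1 > $subsys/attr_allow_any_host
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1 > $subsys/namespaces/1/enable
    echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
    echo tcp > $nvmet/ports/1/addr_trtype
    echo 4420 > $nvmet/ports/1/addr_trsvcid
    echo ipv4 > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/

The final ln -s is what exposes the subsystem on the port; the nvme discover output that follows confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are being served on 10.0.0.1:4420.
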
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:33:39.333
00:33:39.333 Discovery Log Number of Records 2, Generation counter 2
00:33:39.333 =====Discovery Log Entry 0======
00:33:39.333 trtype: tcp
00:33:39.333 adrfam: ipv4
00:33:39.333 subtype: current discovery subsystem
00:33:39.334 treq: not specified, sq flow control disable supported
00:33:39.334 portid: 1
00:33:39.334 trsvcid: 4420
00:33:39.334 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:33:39.334 traddr: 10.0.0.1
00:33:39.334 eflags: none
00:33:39.334 sectype: none
00:33:39.334 =====Discovery Log Entry 1======
00:33:39.334 trtype: tcp
00:33:39.334 adrfam: ipv4
00:33:39.334 subtype: nvme subsystem
00:33:39.334 treq: not specified, sq flow control disable supported
00:33:39.334 portid: 1
00:33:39.334 trsvcid: 4420
00:33:39.334 subnqn: nqn.2016-06.io.spdk:testnqn
00:33:39.334 traddr: 10.0.0.1
00:33:39.334 eflags: none
00:33:39.334 sectype: none
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:33:39.334 EAL: No free 2048 kB hugepages reported on node 1
00:33:39.334 =====================================================
00:33:39.334 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:33:39.334 =====================================================
00:33:39.334 Controller Capabilities/Features
00:33:39.334 ================================
00:33:39.334 Vendor ID: 0000
00:33:39.334 Subsystem Vendor ID: 0000
00:33:39.334 Serial Number: 0b9b6e40d2951541d1c3
00:33:39.334 Model Number: Linux
00:33:39.334 Firmware Version: 6.7.0-68
00:33:39.334 Recommended Arb Burst: 0
00:33:39.334 IEEE OUI Identifier: 00 00 00
00:33:39.334 Multi-path I/O
00:33:39.334 May have multiple subsystem ports: No
00:33:39.334 May have multiple controllers: No
00:33:39.334 Associated with SR-IOV VF: No
00:33:39.334 Max Data Transfer Size: Unlimited
00:33:39.334 Max Number of Namespaces: 0
00:33:39.334 Max Number of I/O Queues: 1024
00:33:39.334 NVMe Specification Version (VS): 1.3
00:33:39.334 NVMe Specification Version (Identify): 1.3
00:33:39.334 Maximum Queue Entries: 1024
00:33:39.334 Contiguous Queues Required: No
00:33:39.334 Arbitration Mechanisms Supported
00:33:39.334 Weighted Round Robin: Not Supported
00:33:39.334 Vendor Specific: Not Supported
00:33:39.334 Reset Timeout: 7500 ms
00:33:39.334 Doorbell Stride: 4 bytes
00:33:39.334 NVM Subsystem Reset: Not Supported
00:33:39.334 Command Sets Supported
00:33:39.334 NVM Command Set: Supported
00:33:39.334 Boot Partition: Not Supported
00:33:39.334 Memory Page Size Minimum: 4096 bytes
00:33:39.334 Memory Page Size Maximum: 4096 bytes
00:33:39.334 Persistent Memory Region: Not Supported
00:33:39.334 Optional Asynchronous Events Supported
00:33:39.334 Namespace Attribute Notices: Not Supported
00:33:39.334 Firmware Activation Notices: Not Supported
00:33:39.334 ANA Change Notices: Not Supported
00:33:39.334 PLE Aggregate Log Change Notices: Not Supported
00:33:39.334 LBA Status Info Alert Notices: Not Supported
00:33:39.334 EGE Aggregate Log Change Notices: Not Supported
00:33:39.334 Normal NVM Subsystem Shutdown event: Not Supported
00:33:39.334 Zone Descriptor Change Notices: Not Supported
00:33:39.334 Discovery Log Change Notices: Supported
00:33:39.334 Controller Attributes
00:33:39.334 128-bit Host Identifier: Not Supported
00:33:39.334 Non-Operational Permissive Mode: Not Supported
00:33:39.334 NVM Sets: Not Supported
00:33:39.334 Read Recovery Levels: Not Supported
00:33:39.334 Endurance Groups: Not Supported
00:33:39.334 Predictable Latency Mode: Not Supported
00:33:39.334 Traffic Based Keep ALive: Not Supported
00:33:39.334 Namespace Granularity: Not Supported
00:33:39.334 SQ Associations: Not Supported
00:33:39.334 UUID List: Not Supported
00:33:39.334 Multi-Domain Subsystem: Not Supported
00:33:39.334 Fixed Capacity Management: Not Supported
00:33:39.334 Variable Capacity Management: Not Supported
00:33:39.334 Delete Endurance Group: Not Supported
00:33:39.334 Delete NVM Set: Not Supported
00:33:39.334 Extended LBA Formats Supported: Not Supported
00:33:39.334 Flexible Data Placement Supported: Not Supported
00:33:39.334
00:33:39.334 Controller Memory Buffer Support
00:33:39.334 ================================
00:33:39.334 Supported: No
00:33:39.334
00:33:39.334 Persistent Memory Region Support
00:33:39.334 ================================
00:33:39.334 Supported: No
00:33:39.334
00:33:39.334 Admin Command Set Attributes
00:33:39.334 ============================
00:33:39.334 Security Send/Receive: Not Supported
00:33:39.334 Format NVM: Not Supported
00:33:39.334 Firmware Activate/Download: Not Supported
00:33:39.334 Namespace Management: Not Supported
00:33:39.334 Device Self-Test: Not Supported
00:33:39.334 Directives: Not Supported
00:33:39.334 NVMe-MI: Not Supported
00:33:39.334 Virtualization Management: Not Supported
00:33:39.334 Doorbell Buffer Config: Not Supported
00:33:39.334 Get LBA Status Capability: Not Supported
00:33:39.334 Command & Feature Lockdown Capability: Not Supported
00:33:39.334 Abort Command Limit: 1
00:33:39.334 Async Event Request Limit: 1
00:33:39.334 Number of Firmware Slots: N/A
00:33:39.334 Firmware Slot 1 Read-Only: N/A
00:33:39.334 Firmware Activation Without Reset: N/A
00:33:39.334 Multiple Update Detection Support: N/A
00:33:39.334 Firmware Update Granularity: No Information Provided
00:33:39.334 Per-Namespace SMART Log: No
00:33:39.334 Asymmetric Namespace Access Log Page: Not Supported
00:33:39.334 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:33:39.334 Command Effects Log Page: Not Supported
00:33:39.334 Get Log Page Extended Data: Supported
00:33:39.334 Telemetry Log Pages: Not Supported
00:33:39.334 Persistent Event Log Pages: Not Supported
00:33:39.334 Supported Log Pages Log Page: May Support
00:33:39.334 Commands Supported & Effects Log Page: Not Supported
00:33:39.334 Feature Identifiers & Effects Log Page:May Support
00:33:39.334 NVMe-MI Commands & Effects Log Page: May Support
00:33:39.334 Data Area 4 for Telemetry Log: Not Supported
00:33:39.334 Error Log Page Entries Supported: 1
00:33:39.334 Keep Alive: Not Supported
00:33:39.334
00:33:39.334 NVM Command Set Attributes
00:33:39.334 ==========================
00:33:39.334 Submission Queue Entry Size
00:33:39.334 Max: 1
00:33:39.334 Min: 1
00:33:39.334 Completion Queue Entry Size
00:33:39.334 Max: 1
00:33:39.334 Min: 1
00:33:39.334 Number of Namespaces: 0
00:33:39.334 Compare Command: Not Supported
00:33:39.334 Write Uncorrectable Command: Not Supported
00:33:39.334 Dataset Management Command: Not Supported
00:33:39.334 Write Zeroes Command: Not Supported
00:33:39.334 Set Features Save Field: Not Supported
00:33:39.334 Reservations: Not Supported
00:33:39.334 Timestamp: Not Supported
00:33:39.334 Copy: Not Supported
00:33:39.334 Volatile Write Cache: Not Present
00:33:39.334 Atomic Write Unit (Normal): 1
00:33:39.334 Atomic Write Unit (PFail): 1
00:33:39.334 Atomic Compare & Write Unit: 1
00:33:39.334 Fused Compare & Write: Not Supported
00:33:39.334 Scatter-Gather List
00:33:39.334 SGL Command Set: Supported
00:33:39.334 SGL Keyed: Not Supported
00:33:39.334 SGL Bit Bucket Descriptor: Not Supported
00:33:39.334 SGL Metadata Pointer: Not Supported
00:33:39.334 Oversized SGL: Not Supported
00:33:39.334 SGL Metadata Address: Not Supported
00:33:39.334 SGL Offset: Supported
00:33:39.334 Transport SGL Data Block: Not Supported
00:33:39.334 Replay Protected Memory Block: Not Supported
00:33:39.334
00:33:39.334 Firmware Slot Information
00:33:39.334 =========================
00:33:39.334 Active slot: 0
00:33:39.334
00:33:39.334
00:33:39.334 Error Log
00:33:39.334 =========
00:33:39.334
00:33:39.334 Active Namespaces
00:33:39.334 =================
00:33:39.334 Discovery Log Page
00:33:39.334 ==================
00:33:39.334 Generation Counter: 2
00:33:39.334 Number of Records: 2
00:33:39.334 Record Format: 0
00:33:39.334
00:33:39.334 Discovery Log Entry 0
00:33:39.334 ----------------------
00:33:39.334 Transport Type: 3 (TCP)
00:33:39.334 Address Family: 1 (IPv4)
00:33:39.334 Subsystem Type: 3 (Current Discovery Subsystem)
00:33:39.334 Entry Flags:
00:33:39.334 Duplicate Returned Information: 0
00:33:39.334 Explicit Persistent Connection Support for Discovery: 0
00:33:39.334 Transport Requirements:
00:33:39.334 Secure Channel: Not Specified
00:33:39.334 Port ID: 1 (0x0001)
00:33:39.334 Controller ID: 65535 (0xffff)
00:33:39.334 Admin Max SQ Size: 32
00:33:39.334 Transport Service Identifier: 4420
00:33:39.334 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:33:39.334 Transport Address: 10.0.0.1
00:33:39.334 Discovery Log Entry 1
00:33:39.334 ----------------------
00:33:39.334 Transport Type: 3 (TCP)
00:33:39.334 Address Family: 1 (IPv4)
00:33:39.334 Subsystem Type: 2 (NVM Subsystem)
00:33:39.334 Entry Flags:
00:33:39.334 Duplicate Returned Information: 0
00:33:39.335 Explicit Persistent Connection Support for Discovery: 0
00:33:39.335 Transport Requirements:
00:33:39.335 Secure Channel: Not Specified
00:33:39.335 Port ID: 1 (0x0001)
00:33:39.335 Controller ID: 65535 (0xffff)
00:33:39.335 Admin Max SQ Size: 32
00:33:39.335 Transport Service Identifier: 4420
00:33:39.335 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:33:39.335 Transport Address: 10.0.0.1
00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:33:39.335 EAL: No free 2048 kB hugepages reported on node 1
00:33:39.335 get_feature(0x01) failed
00:33:39.335 get_feature(0x02) failed
00:33:39.335 get_feature(0x04) failed
00:33:39.335 =====================================================
00:33:39.335 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:33:39.335 =====================================================
00:33:39.335 Controller Capabilities/Features
00:33:39.335 ================================
00:33:39.335 Vendor ID: 0000
00:33:39.335 Subsystem Vendor ID: 0000
00:33:39.335 Serial Number: 28180560b766c7ce377d
00:33:39.335 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:33:39.335 Firmware Version: 6.7.0-68
00:33:39.335 Recommended Arb Burst: 6
00:33:39.335 IEEE OUI Identifier: 00 00 00
00:33:39.335 Multi-path I/O
00:33:39.335 May have multiple subsystem ports: Yes
00:33:39.335 May have multiple controllers: Yes
00:33:39.335 Associated with SR-IOV VF: No
00:33:39.335 Max Data Transfer Size: Unlimited
00:33:39.335 Max Number of Namespaces: 1024
00:33:39.335 Max Number of I/O Queues: 128
00:33:39.335 NVMe Specification Version (VS): 1.3
00:33:39.335 NVMe Specification Version (Identify): 1.3
00:33:39.335 Maximum Queue Entries: 1024
00:33:39.335 Contiguous Queues Required: No
00:33:39.335 Arbitration Mechanisms Supported
00:33:39.335 Weighted Round Robin: Not Supported
00:33:39.335 Vendor Specific: Not Supported
00:33:39.335 Reset Timeout: 7500 ms 00:33:39.335 Doorbell Stride: 4 bytes 00:33:39.335 NVM Subsystem Reset: Not Supported 00:33:39.335 Command Sets Supported 00:33:39.335 NVM Command Set: Supported 00:33:39.335 Boot Partition: Not Supported 00:33:39.335 Memory Page Size Minimum: 4096 bytes 00:33:39.335 Memory Page Size Maximum: 4096 bytes 00:33:39.335 Persistent Memory Region: Not Supported 00:33:39.335 Optional Asynchronous Events Supported 00:33:39.335 Namespace Attribute Notices: Supported 00:33:39.335 Firmware Activation Notices: Not Supported 00:33:39.335 ANA Change Notices: Supported 00:33:39.335 PLE Aggregate Log Change Notices: Not Supported 00:33:39.335 LBA Status Info Alert Notices: Not Supported 00:33:39.335 EGE Aggregate Log Change Notices: Not Supported 00:33:39.335 Normal NVM Subsystem Shutdown event: Not Supported 00:33:39.335 Zone Descriptor Change Notices: Not Supported 00:33:39.335 Discovery Log Change Notices: Not Supported 00:33:39.335 Controller Attributes 00:33:39.335 128-bit Host Identifier: Supported 00:33:39.335 Non-Operational Permissive Mode: Not Supported 00:33:39.335 NVM Sets: Not Supported 00:33:39.335 Read Recovery Levels: Not Supported 00:33:39.335 Endurance Groups: Not Supported 00:33:39.335 Predictable Latency Mode: Not Supported 00:33:39.335 Traffic Based Keep Alive: Supported 00:33:39.335 Namespace Granularity: Not Supported 00:33:39.335 SQ Associations: Not Supported 00:33:39.335 UUID List: Not Supported 00:33:39.335 Multi-Domain Subsystem: Not Supported 00:33:39.335 Fixed Capacity Management: Not Supported 00:33:39.335 Variable Capacity Management: Not Supported 00:33:39.335 Delete Endurance Group: Not Supported 00:33:39.335 Delete NVM Set: Not Supported 00:33:39.335 Extended LBA Formats Supported: Not Supported 00:33:39.335 Flexible Data Placement Supported: Not Supported 00:33:39.335 00:33:39.335 Controller Memory Buffer Support 00:33:39.335 ================================ 00:33:39.335 Supported: No 00:33:39.335 00:33:39.335 Persistent Memory Region Support 00:33:39.335 ================================ 00:33:39.335 Supported: No 00:33:39.335 00:33:39.335 Admin Command Set Attributes 00:33:39.335 ============================ 00:33:39.335 Security Send/Receive: Not Supported 00:33:39.335 Format NVM: Not Supported 00:33:39.335 Firmware Activate/Download: Not Supported 00:33:39.335 Namespace Management: Not Supported 00:33:39.335 Device Self-Test: Not Supported 00:33:39.335 Directives: Not Supported 00:33:39.335 NVMe-MI: Not Supported 00:33:39.335 Virtualization Management: Not Supported 00:33:39.335 Doorbell Buffer Config: Not Supported 00:33:39.335 Get LBA Status Capability: Not Supported 00:33:39.335 Command & Feature Lockdown Capability: Not Supported 00:33:39.335 Abort Command Limit: 4 00:33:39.335 Async Event Request Limit: 4 00:33:39.335 Number of Firmware Slots: N/A 00:33:39.335 Firmware Slot 1 Read-Only: N/A 00:33:39.335 Firmware Activation Without Reset: N/A 00:33:39.335 Multiple Update Detection Support: N/A 00:33:39.335 Firmware Update Granularity: No Information Provided 00:33:39.335 Per-Namespace SMART Log: Yes 00:33:39.335 Asymmetric Namespace Access Log Page: Supported 00:33:39.335 ANA Transition Time : 10 sec 00:33:39.335 00:33:39.335 Asymmetric Namespace Access Capabilities 00:33:39.335 ANA Optimized State : Supported 00:33:39.335 ANA Non-Optimized State : Supported 00:33:39.335 ANA Inaccessible State : Supported 00:33:39.335 ANA Persistent Loss State : Supported 00:33:39.335 ANA Change State : Supported 00:33:39.335 ANAGRPID is not
changed : No 00:33:39.335 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:39.335 00:33:39.335 ANA Group Identifier Maximum : 128 00:33:39.335 Number of ANA Group Identifiers : 128 00:33:39.335 Max Number of Allowed Namespaces : 1024 00:33:39.335 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:39.335 Command Effects Log Page: Supported 00:33:39.335 Get Log Page Extended Data: Supported 00:33:39.335 Telemetry Log Pages: Not Supported 00:33:39.335 Persistent Event Log Pages: Not Supported 00:33:39.335 Supported Log Pages Log Page: May Support 00:33:39.335 Commands Supported & Effects Log Page: Not Supported 00:33:39.335 Feature Identifiers & Effects Log Page: May Support 00:33:39.335 NVMe-MI Commands & Effects Log Page: May Support 00:33:39.335 Data Area 4 for Telemetry Log: Not Supported 00:33:39.335 Error Log Page Entries Supported: 128 00:33:39.335 Keep Alive: Supported 00:33:39.335 Keep Alive Granularity: 1000 ms 00:33:39.335 00:33:39.335 NVM Command Set Attributes 00:33:39.335 ========================== 00:33:39.335 Submission Queue Entry Size 00:33:39.335 Max: 64 00:33:39.335 Min: 64 00:33:39.335 Completion Queue Entry Size 00:33:39.335 Max: 16 00:33:39.335 Min: 16 00:33:39.335 Number of Namespaces: 1024 00:33:39.335 Compare Command: Not Supported 00:33:39.335 Write Uncorrectable Command: Not Supported 00:33:39.335 Dataset Management Command: Supported 00:33:39.335 Write Zeroes Command: Supported 00:33:39.335 Set Features Save Field: Not Supported 00:33:39.335 Reservations: Not Supported 00:33:39.335 Timestamp: Not Supported 00:33:39.335 Copy: Not Supported 00:33:39.335 Volatile Write Cache: Present 00:33:39.335 Atomic Write Unit (Normal): 1 00:33:39.335 Atomic Write Unit (PFail): 1 00:33:39.335 Atomic Compare & Write Unit: 1 00:33:39.335 Fused Compare & Write: Not Supported 00:33:39.335 Scatter-Gather List 00:33:39.335 SGL Command Set: Supported 00:33:39.335 SGL Keyed: Not Supported 00:33:39.335 SGL Bit Bucket Descriptor: Not Supported 00:33:39.335 SGL Metadata Pointer: Not Supported 00:33:39.335 Oversized SGL: Not Supported 00:33:39.335 SGL Metadata Address: Not Supported 00:33:39.335 SGL Offset: Supported 00:33:39.335 Transport SGL Data Block: Not Supported 00:33:39.335 Replay Protected Memory Block: Not Supported 00:33:39.335 00:33:39.335 Firmware Slot Information 00:33:39.335 ========================= 00:33:39.335 Active slot: 0 00:33:39.335 00:33:39.335 Asymmetric Namespace Access 00:33:39.335 =========================== 00:33:39.335 Change Count : 0 00:33:39.335 Number of ANA Group Descriptors : 1 00:33:39.335 ANA Group Descriptor : 0 00:33:39.335 ANA Group ID : 1 00:33:39.335 Number of NSID Values : 1 00:33:39.335 Change Count : 0 00:33:39.335 ANA State : 1 00:33:39.335 Namespace Identifier : 1 00:33:39.335 00:33:39.335 Commands Supported and Effects 00:33:39.335 ============================== 00:33:39.335 Admin Commands 00:33:39.335 -------------- 00:33:39.335 Get Log Page (02h): Supported 00:33:39.335 Identify (06h): Supported 00:33:39.335 Abort (08h): Supported 00:33:39.335 Set Features (09h): Supported 00:33:39.335 Get Features (0Ah): Supported 00:33:39.335 Asynchronous Event Request (0Ch): Supported 00:33:39.335 Keep Alive (18h): Supported 00:33:39.335 I/O Commands 00:33:39.335 ------------ 00:33:39.335 Flush (00h): Supported 00:33:39.335 Write (01h): Supported LBA-Change 00:33:39.335 Read (02h): Supported 00:33:39.336 Write Zeroes (08h): Supported LBA-Change 00:33:39.336 Dataset Management (09h): Supported 00:33:39.336 00:33:39.336 Error Log 00:33:39.336 ========= 
00:33:39.336 Entry: 0 00:33:39.336 Error Count: 0x3 00:33:39.336 Submission Queue Id: 0x0 00:33:39.336 Command Id: 0x5 00:33:39.336 Phase Bit: 0 00:33:39.336 Status Code: 0x2 00:33:39.336 Status Code Type: 0x0 00:33:39.336 Do Not Retry: 1 00:33:39.336 Error Location: 0x28 00:33:39.336 LBA: 0x0 00:33:39.336 Namespace: 0x0 00:33:39.336 Vendor Log Page: 0x0 00:33:39.336 ----------- 00:33:39.336 Entry: 1 00:33:39.336 Error Count: 0x2 00:33:39.336 Submission Queue Id: 0x0 00:33:39.336 Command Id: 0x5 00:33:39.336 Phase Bit: 0 00:33:39.336 Status Code: 0x2 00:33:39.336 Status Code Type: 0x0 00:33:39.336 Do Not Retry: 1 00:33:39.336 Error Location: 0x28 00:33:39.336 LBA: 0x0 00:33:39.336 Namespace: 0x0 00:33:39.336 Vendor Log Page: 0x0 00:33:39.336 ----------- 00:33:39.336 Entry: 2 00:33:39.336 Error Count: 0x1 00:33:39.336 Submission Queue Id: 0x0 00:33:39.336 Command Id: 0x4 00:33:39.336 Phase Bit: 0 00:33:39.336 Status Code: 0x2 00:33:39.336 Status Code Type: 0x0 00:33:39.336 Do Not Retry: 1 00:33:39.336 Error Location: 0x28 00:33:39.336 LBA: 0x0 00:33:39.336 Namespace: 0x0 00:33:39.336 Vendor Log Page: 0x0 00:33:39.336 00:33:39.336 Number of Queues 00:33:39.336 ================ 00:33:39.336 Number of I/O Submission Queues: 128 00:33:39.336 Number of I/O Completion Queues: 128 00:33:39.336 00:33:39.336 ZNS Specific Controller Data 00:33:39.336 ============================ 00:33:39.336 Zone Append Size Limit: 0 00:33:39.336 00:33:39.336 00:33:39.336 Active Namespaces 00:33:39.336 ================= 00:33:39.336 get_feature(0x05) failed 00:33:39.336 Namespace ID:1 00:33:39.336 Command Set Identifier: NVM (00h) 00:33:39.336 Deallocate: Supported 00:33:39.336 Deallocated/Unwritten Error: Not Supported 00:33:39.336 Deallocated Read Value: Unknown 00:33:39.336 Deallocate in Write Zeroes: Not Supported 00:33:39.336 Deallocated Guard Field: 0xFFFF 00:33:39.336 Flush: Supported 00:33:39.336 Reservation: Not Supported 00:33:39.336 Namespace Sharing Capabilities: Multiple Controllers 00:33:39.336 Size (in LBAs): 3750748848 (1788GiB) 00:33:39.336 Capacity (in LBAs): 3750748848 (1788GiB) 00:33:39.336 Utilization (in LBAs): 3750748848 (1788GiB) 00:33:39.336 UUID: f2ee0998-1e32-4962-972b-dc90d381b5b2 00:33:39.336 Thin Provisioning: Not Supported 00:33:39.336 Per-NS Atomic Units: Yes 00:33:39.336 Atomic Write Unit (Normal): 8 00:33:39.336 Atomic Write Unit (PFail): 8 00:33:39.336 Preferred Write Granularity: 8 00:33:39.336 Atomic Compare & Write Unit: 8 00:33:39.336 Atomic Boundary Size (Normal): 0 00:33:39.336 Atomic Boundary Size (PFail): 0 00:33:39.336 Atomic Boundary Offset: 0 00:33:39.336 NGUID/EUI64 Never Reused: No 00:33:39.336 ANA group ID: 1 00:33:39.336 Namespace Write Protected: No 00:33:39.336 Number of LBA Formats: 1 00:33:39.336 Current LBA Format: LBA Format #00 00:33:39.336 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:39.336 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:39.336 rmmod nvme_tcp 00:33:39.336 rmmod nvme_fabrics 00:33:39.336 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.597 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:41.511 00:58:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:44.815 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:44.815 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:44.815 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:44.815 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:44.815 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:44.815 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:33:45.075 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:45.075 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:45.336 00:33:45.336 real 0m18.417s 00:33:45.336 user 0m5.003s 00:33:45.336 sys 0m10.442s 00:33:45.336 00:59:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:45.336 00:59:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.336 ************************************ 00:33:45.336 END TEST nvmf_identify_kernel_target 00:33:45.336 ************************************ 00:33:45.597 00:59:03 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.597 00:59:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:45.597 00:59:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:45.597 00:59:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.597 ************************************ 00:33:45.597 START TEST nvmf_auth_host 00:33:45.597 ************************************ 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.597 * Looking for test storage... 00:33:45.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.597 00:59:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.598 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:52.237 00:59:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:52.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:52.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:52.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:52.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:52.237 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.238 
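
At this point cvl_0_0 has been chosen as the target-side interface and cvl_0_1 as the initiator side. The nvmf_tcp_init trace that follows builds the test topology by moving the target port into its own network namespace, so the two physical e810 ports (evidently cabled to each other on this phy rig, given that the pings below succeed) can talk over real TCP on a single host. Condensed to the commands that matter, it performs:

    ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port

and then pings in both directions to prove the link.
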
00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.238 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.498 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.499 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:33:52.499 00:33:52.499 --- 10.0.0.2 ping statistics --- 00:33:52.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.499 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:33:52.499 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:33:52.759 00:33:52.759 --- 10.0.0.1 ping statistics --- 00:33:52.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.759 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=642575 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 642575 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 642575 ']' 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
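
With both addresses answering pings, nvmfappstart launches the SPDK app inside the target namespace (pid 642575 above) and waitforlisten blocks until its RPC socket comes up. Outside the harness, the equivalent two steps would be roughly the following sketch (paths relative to an SPDK checkout; polling via rpc_get_methods is just one convenient way to wait, not the helper's exact mechanism):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    ./scripts/rpc.py -t 60 rpc_get_methods > /dev/null   # returns once /var/tmp/spdk.sock accepts RPCs
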
00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:52.759 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ca6182ca22250a7186f054b347f03d3 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ddx 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ca6182ca22250a7186f054b347f03d3 0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ca6182ca22250a7186f054b347f03d3 0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ca6182ca22250a7186f054b347f03d3 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ddx 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ddx 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ddx 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:53.701 
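
gen_dhchap_key draws len/2 bytes from /dev/urandom and hex-encodes them, so the 32-, 48- and 64-character secrets generated here come from 16, 24 and 32 random bytes respectively, exactly as the xxd call in the trace above shows. In isolation:

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> a 32-hex-character secret
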
00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e86fe1d5db416d1135d65b80321df04259b43bf0fa69650f948e088d1ea327b8 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gSm 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e86fe1d5db416d1135d65b80321df04259b43bf0fa69650f948e088d1ea327b8 3 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e86fe1d5db416d1135d65b80321df04259b43bf0fa69650f948e088d1ea327b8 3 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e86fe1d5db416d1135d65b80321df04259b43bf0fa69650f948e088d1ea327b8 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gSm 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gSm 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gSm 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d74c32d879620adbd6815bf75ccf99af397967b2412c427 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nXY 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d74c32d879620adbd6815bf75ccf99af397967b2412c427 0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d74c32d879620adbd6815bf75ccf99af397967b2412c427 0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d74c32d879620adbd6815bf75ccf99af397967b2412c427 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nXY 00:33:53.701 00:59:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nXY 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nXY 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=581dc5cd38309ce541a5d4083ac5ef4261ccbe3d08d90f3b 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jyM 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 581dc5cd38309ce541a5d4083ac5ef4261ccbe3d08d90f3b 2 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 581dc5cd38309ce541a5d4083ac5ef4261ccbe3d08d90f3b 2 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=581dc5cd38309ce541a5d4083ac5ef4261ccbe3d08d90f3b 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jyM 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jyM 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jyM 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=489e8c7d7dbec04ebb497609bfe37e00 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.43i 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 489e8c7d7dbec04ebb497609bfe37e00 1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 489e8c7d7dbec04ebb497609bfe37e00 1 
00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=489e8c7d7dbec04ebb497609bfe37e00 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:53.701 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.43i 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.43i 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.43i 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:53.963 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07fc8a6326bf12b381e381c0c6c0e1c1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8cm 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07fc8a6326bf12b381e381c0c6c0e1c1 1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07fc8a6326bf12b381e381c0c6c0e1c1 1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07fc8a6326bf12b381e381c0c6c0e1c1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8cm 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8cm 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8cm 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=fc73c640476bc414a29d1538615278bff09c2852c0ad5625 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.v1D 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc73c640476bc414a29d1538615278bff09c2852c0ad5625 2 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc73c640476bc414a29d1538615278bff09c2852c0ad5625 2 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc73c640476bc414a29d1538615278bff09c2852c0ad5625 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.v1D 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.v1D 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.v1D 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=770f77596a52c42fd1d846ca85284426 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mod 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 770f77596a52c42fd1d846ca85284426 0 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 770f77596a52c42fd1d846ca85284426 0 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=770f77596a52c42fd1d846ca85284426 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mod 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mod 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mod 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fabaa6aeac4f1ce16d53c8996d72d2740b25ba6d12ddea07232f1ac61d60e31b 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OtM 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fabaa6aeac4f1ce16d53c8996d72d2740b25ba6d12ddea07232f1ac61d60e31b 3 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fabaa6aeac4f1ce16d53c8996d72d2740b25ba6d12ddea07232f1ac61d60e31b 3 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fabaa6aeac4f1ce16d53c8996d72d2740b25ba6d12ddea07232f1ac61d60e31b 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OtM 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OtM 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.OtM 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 642575 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 642575 ']' 00:33:53.963 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
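
All five keys, plus the four challenge counterparts (ckeys), are now on disk in the DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64>:, where <hash> is 00/01/02/03 for none/SHA-256/SHA-384/SHA-512. Assuming the standard representation (the base64 payload is the ASCII secret with its little-endian CRC-32 appended; the inline "python -" in the trace performs this wrapping), the step amounts to the following sketch:

    # sketch of the DHHC-1 wrapping; the hex string itself serves as the ASCII secret
    python3 - <<'EOF'
    import base64, zlib
    secret = b"2ca6182ca22250a7186f054b347f03d3"    # keys[0] from the trace
    crc = zlib.crc32(secret).to_bytes(4, "little")  # 4-byte CRC-32, little-endian
    print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(secret + crc).decode()))
    EOF
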
00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ddx 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gSm ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gSm 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nXY 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jyM ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jyM 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.43i 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8cm ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8cm 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.v1D 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mod ]] 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mod 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.223 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OtM 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.224 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.482 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
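With the keyring loaded, nvmet_auth_init (host/auth.sh@35-38) stands up the authenticating counterpart: configure_kernel_target (nvmf/common.sh@632 onward) loads the nvmet module, resets PCI bindings through scripts/setup.sh, and populates the configfs tree whose paths were just computed. xtrace omits redirections, so the targets of the mkdir/echo sequence that follows are invisible; assuming the standard kernel nvmet attribute names, it amounts to:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"         # common.sh@658-660
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # @665
echo 1            > "$subsys/attr_allow_any_host"               # @667; auth.sh@37 later echoes 0 here to lock the subsystem to the allowed host
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # @668, the drive left bound to nvme by setup.sh
echo 1            > "$subsys/namespaces/1/enable"               # @669
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                # @671
echo tcp          > "$nvmet/ports/1/addr_trtype"                # @672
echo 4420         > "$nvmet/ports/1/addr_trsvcid"               # @673
echo ipv4         > "$nvmet/ports/1/addr_adrfam"                # @674
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                    # @677

The nvme discover output below confirms the result: one discovery entry plus one subsystem entry for nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420/tcp.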
00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:54.483 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:57.779 Waiting for block devices as requested 00:33:57.779 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:57.779 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:57.779 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:57.779 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:57.779 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:58.040 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:58.040 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:58.040 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:58.040 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:58.300 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:58.300 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:58.561 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:58.561 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:58.561 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:58.561 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:58.821 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:58.821 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:59.764 No valid GPT data, bailing 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:59.764 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:59.764 00:33:59.764 Discovery Log Number of Records 2, Generation counter 2 00:33:59.764 =====Discovery Log Entry 0====== 00:33:59.764 trtype: tcp 00:33:59.764 adrfam: ipv4 00:33:59.764 subtype: current discovery subsystem 00:33:59.764 treq: not specified, sq flow control disable supported 00:33:59.764 portid: 1 00:33:59.764 trsvcid: 4420 00:33:59.764 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:59.764 traddr: 10.0.0.1 00:33:59.764 eflags: none 00:33:59.764 sectype: none 00:33:59.764 =====Discovery Log Entry 1====== 00:33:59.764 trtype: tcp 00:33:59.764 adrfam: ipv4 00:33:59.764 subtype: nvme subsystem 00:33:59.764 treq: not specified, sq flow control disable supported 00:33:59.764 portid: 1 00:33:59.764 trsvcid: 4420 00:33:59.764 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:59.764 traddr: 10.0.0.1 00:33:59.764 eflags: none 00:33:59.764 sectype: none 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 
]] 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.764 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.024 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.025 nvme0n1 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.025 00:59:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.025 
00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.025 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.285 nvme0n1 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.285 00:59:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.285 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.286 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.286 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.286 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.546 nvme0n1 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
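The connect_authenticate rounds tracing through here (host/auth.sh@55-65) all follow one shape: bdev_nvme_set_options pins the initiator to the digest/dhgroup pair under test, bdev_nvme_attach_controller performs the fabric connect using the keyring names registered earlier, and the round passes only if exactly one controller named nvme0 survives the DH-HMAC-CHAP handshake before being detached again. Condensed into a stand-alone function (scripts/rpc.py again standing in for rpc_cmd; the real helper adds --dhchap-ctrlr-key only when the slot has a controller key):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # a successful handshake leaves exactly one controller behind
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0
}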
00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.546 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.807 nvme0n1 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:00.807 00:59:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.807 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.069 nvme0n1 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.069 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.330 nvme0n1 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.330 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.590 nvme0n1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.590 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.851 nvme0n1 00:34:01.851 
00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.851 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.851 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.852 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.112 nvme0n1 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
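Before each round, nvmet_auth_set_key (host/auth.sh@42-51) re-keys the kernel target's host entry, as in the @44-49 assignments and echoes around this point for key 3: the digest goes in as an hmac(...) string, the DH group by name, and the DHHC-1 secrets verbatim. The redirect targets are again hidden by xtrace; assuming the kernel nvmet per-host dhchap attributes, the four echoes land as (secrets abbreviated to placeholders):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"       # auth.sh@48
echo ffdhe3072        > "$host/dhchap_dhgroup"    # @49
echo 'DHHC-1:02:...:' > "$host/dhchap_key"        # @50, host secret for this slot
echo 'DHHC-1:00:...:' > "$host/dhchap_ctrl_key"   # @51, only when a controller key exists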
00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.112 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 nvme0n1 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.373 
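The host/auth.sh@100-104 markers threading through this output are the sweep itself: every digest is paired with every DH group and every key slot, re-keying the target and reconnecting each time. Reconstructed from the lists printed when the arrays were first joined (sha256,sha384,sha512 and ffdhe2048 through ffdhe8192), the driver is effectively:

for digest in sha256 sha384 sha512; do                                    # @100
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do  # @101
        for keyid in "${!keys[@]}"; do                                    # @102, slots 0-4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"              # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"            # @104
        done
    done
done

which is why the same set-key / attach / verify / detach pattern repeats below with only the digest, group, and key id changing between nvme0n1 blocks.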
00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.373 00:59:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.373 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.633 nvme0n1 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.633 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:02.634 00:59:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.634 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.894 nvme0n1 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.894 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.154 00:59:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.154 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.414 nvme0n1 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.414 00:59:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.414 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.674 nvme0n1 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
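[editor's note] Each keyid then runs the same host-side cycle, traced as connect_authenticate (host/auth.sh@55-65): restrict the initiator to the digest/DH group under test, attach with the matching key pair, confirm the controller appeared, and detach. The sketch below condenses that cycle; the RPC names and flags are copied from the trace, while rpc_cmd and the ckeys array are assumed helpers from the surrounding test scripts.

    # Condensed sketch of one connect/verify/detach cycle (host/auth.sh@55-65).
    # ASSUMPTION: rpc_cmd wraps SPDK's rpc.py; ckeys[keyid] may be empty, in
    # which case --dhchap-ctrlr-key is omitted (as for keyid=4 above).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Restrict the initiator to the digest/dhgroup being tested.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over TCP to the authenticated subsystem with key N (and ckey N if set).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # DH-HMAC-CHAP succeeded iff the controller is now visible; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }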
00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.674 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.935 nvme0n1 00:34:03.935 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.935 00:59:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.935 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.935 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.935 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.935 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.195 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.455 nvme0n1 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:04.455 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:04.456 00:59:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.456 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.025 nvme0n1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.025 
00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.025 00:59:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.025 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.596 nvme0n1 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.596 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 nvme0n1 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.857 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.118 
00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.118 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.119 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.119 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.119 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.119 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.119 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.690 nvme0n1 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.690 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.951 nvme0n1 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.951 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.211 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.212 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.786 nvme0n1 00:34:07.786 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.786 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.786 00:59:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.786 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.786 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.047 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.619 nvme0n1 00:34:08.619 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.619 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.619 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.619 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.619 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:08.880 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.881 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.453 nvme0n1 00:34:09.453 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.714 
00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
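Each pass traced above repeats one fixed five-step cycle: program the key pair into the kernel nvmet target (nvmet_auth_set_key), restrict the host to a single digest/DH-group pair (bdev_nvme_set_options), attach with the per-key DH-HMAC-CHAP secrets, confirm the controller surfaced, and detach. A minimal standalone sketch of the keyid-3 cycle running here, assuming SPDK's scripts/rpc.py on the default RPC socket with the keys already registered under the names this run uses (the test itself issues the same RPCs through its rpc_cmd wrapper):

rpc=scripts/rpc.py
# Allow exactly one digest and one DH group for the handshake under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# Attach with the host key; --dhchap-ctrlr-key additionally demands that the
# controller authenticate itself (bidirectional DH-HMAC-CHAP).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# A successful handshake surfaces the controller; the trace checks for nvme0.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0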
00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.714 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.285 nvme0n1 00:34:10.285 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.286 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.286 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.286 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.286 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.547 
00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.547 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.118 nvme0n1 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.118 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 nvme0n1 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.379 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
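The last record above is the idiom that makes keyid 4 special elsewhere in this sweep: its ckeys entry is empty, so ${ckeys[keyid]:+...} expands to nothing and the attach goes out without --dhchap-ctrlr-key, exercising host-only (unidirectional) authentication, while every other keyid appends the flag and forces the controller to prove itself too. The expansion mechanics in isolation (key value elided; only the array contents are assumed):

ckeys=( [1]='DHHC-1:02:<elided>:' [4]='' )   # keyid 4 carries no controller key
for keyid in 1 4; do
  # :+ yields the alternate words only when the entry is set and non-empty;
  # the inner quotes keep "ckey${keyid}" a single word after splitting.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${ckey[@]:-(host-only authentication)}"
done
# keyid=1 -> --dhchap-ctrlr-key ckey1
# keyid=4 -> (host-only authentication)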
00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.640 nvme0n1 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.640 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.900 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 nvme0n1 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.900 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.160 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 nvme0n1 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.161 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.421 nvme0n1 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
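The rollover just logged, from sha384/ffdhe2048 to sha384/ffdhe3072, is the middle loop advancing: the line markers host/auth.sh@100-102 show three nested loops, so every digest is exercised against every DH group and every key index. Condensed, the driver amounts to the sketch below; the loop bodies are named exactly as in the trace, but the array contents shown are inferred from what this run covers rather than quoted from the script:

digests=(sha256 sha384 sha512)                                 # assumed; sha256 and sha384 appear so far
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed ordering
for digest in "${digests[@]}"; do                 # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                # host/auth.sh@102: indices 0-4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: host/auth.sh@103
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: host/auth.sh@104
    done
  done
done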
00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.421 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.681 nvme0n1 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:12.681 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
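The secrets echoed above all use the NVMe TP 8006 on-wire representation DHHC-1:<t>:<base64>:, where <t> selects an optional transformation hash for the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, as far as that representation goes, the base64 payload is the raw secret with a 4-byte CRC-32 of the secret appended. A quick plausibility check on the keyid-1 host key from this block (pure shell; the byte count is the only thing verified here):

key='DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==:'
b64=${key#DHHC-1:*:}                     # strip the DHHC-1:<t>: prefix
b64=${b64%:}                             # and the trailing colon
printf '%s' "$b64" | base64 -d | wc -c   # 52 = 48-byte secret + 4-byte CRC-32 trailer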
00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.682 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 nvme0n1 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.942 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.203 nvme0n1 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.203 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.464 nvme0n1 00:34:13.464 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.464 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.464 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.464 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.464 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.465 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.726 nvme0n1 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.726 00:59:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.726 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.726 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.986 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.247 nvme0n1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.247 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.507 nvme0n1 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.507 00:59:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.507 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.508 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.508 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.508 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.508 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.508 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.767 nvme0n1 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.767 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:15.027 00:59:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.027 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.287 nvme0n1 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:15.287 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.547 nvme0n1 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.547 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.117 nvme0n1 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.117 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.118 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.690 nvme0n1 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.690 00:59:34 
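The secrets echoed through this trace all follow the NVMe in-band authentication key format, DHHC-1:NN:<base64>:, where NN names the hash used to transform the configured secret (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key bytes plus a 4-byte CRC32. A minimal sanity check, assuming only coreutils; this helper is illustrative and is not part of auth.sh:

    # Hypothetical helper: verify a DHHC-1 blob decodes to a plausible length.
    # Payload = key + 4-byte CRC32; hashed keys are fixed at 32/48/64 bytes.
    check_dhchap_key() {
        local blob=$1 hmac payload bytes
        hmac=$(cut -d: -f2 <<< "$blob")
        payload=$(cut -d: -f3 <<< "$blob")
        bytes=$(base64 -d <<< "$payload" | wc -c)
        case $hmac in
            00) ((bytes == 36 || bytes == 52 || bytes == 68)) ;; # raw key + CRC32
            01) ((bytes == 36)) ;;                               # SHA-256 + CRC32
            02) ((bytes == 52)) ;;                               # SHA-384 + CRC32
            03) ((bytes == 68)) ;;                               # SHA-512 + CRC32
            *)  return 1 ;;
        esac
    }
    # e.g. check_dhchap_key 'DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd:' && echo ok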
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.690 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.691 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.261 nvme0n1 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.261 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.262 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.831 nvme0n1 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
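The get_main_ns_ip fragments interleaved through these rounds (nvmf/common.sh@741-755) are the helper that resolves which address the initiator dials: it maps each transport to the name of an environment variable, dereferences the entry for the transport under test, and echoes the result, 10.0.0.1 in this run. A condensed reconstruction from the trace; the TEST_TRANSPORT variable name is an assumption, everything else appears in the log:

    # Reconstructed from the nvmf/common.sh xtrace above (assumptions noted).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # values are variable *names*
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1            # "tcp" here (name assumed)
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # ip=NVMF_INITIATOR_IP
        ip=${!ip}                                       # dereference -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }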
00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.831 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.402 nvme0n1 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
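(The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at auth.sh@58 above is what makes keyid 4 a unidirectional pass: ckeys[4] is empty, so the array expands to nothing and no --dhchap-ctrlr-key flag reaches bdev_nvme_attach_controller. A standalone illustration of that bash idiom, with hypothetical placeholder keys:)

ckeys=("DHHC-1:03:..." "")                # index 0 has a ctrlr key, index 1 does not
for keyid in "${!ckeys[@]}"; do
    # expands to two words (flag + key name) when ckeys[keyid] is set and non-empty,
    # and to an empty array otherwise
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # ${arr[@]:-fallback} prints the fallback when the array stayed empty
    echo "keyid=$keyid -> ${ckey[@]:-(no bidirectional auth)}"
done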
00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.402 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 nvme0n1 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.973 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.234 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.235 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.806 nvme0n1 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.806 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:20.066 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.067 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.636 nvme0n1 00:34:20.636 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.636 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.636 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.636 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.636 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.896 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.465 nvme0n1 00:34:21.465 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.465 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:21.466 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.466 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.466 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.726 00:59:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.726 00:59:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.334 nvme0n1 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.334 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.595 nvme0n1 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:22.595 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.855 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.856 00:59:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.856 nvme0n1 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.856 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.115 nvme0n1 00:34:23.115 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.116 00:59:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.116 00:59:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.116 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.376 nvme0n1 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.376 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.377 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 nvme0n1 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.637 00:59:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 nvme0n1 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.897 
00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.897 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.898 00:59:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.898 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.158 nvme0n1 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
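The nvmet_auth_set_key records above (host/auth.sh@42-@51) configure the authentication material on the target side: the bare `echo 'hmac(sha512)'`, `echo ffdhe3072`, and `echo DHHC-1:...` entries are writes whose redirections xtrace does not display. A minimal sketch of the equivalent setup, assuming the target is the Linux kernel nvmet with its standard DH-CHAP configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the host directory path and the key strings below are placeholders, not the secrets from this run:

  # Target-side DH-HMAC-CHAP setup (sketch; assumes kernel nvmet configfs).
  hostnqn="nqn.2024-02.io.spdk:host0"
  host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"
  mkdir -p "$host_dir"
  echo "hmac(sha512)" > "${host_dir}/dhchap_hash"    # digest for the handshake
  echo "ffdhe3072" > "${host_dir}/dhchap_dhgroup"    # FFDHE group under test
  echo "DHHC-1:00:placeholder-host-key:" > "${host_dir}/dhchap_key"
  # Mirror of the [[ -z $ckey ]] guard at host/auth.sh@51: the controller
  # (bidirectional) key is written only when one exists for this keyid.
  ckey="DHHC-1:03:placeholder-ctrl-key:"
  [[ -z $ckey ]] || echo "$ckey" > "${host_dir}/dhchap_ctrl_key"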
00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.158 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.419 nvme0n1 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.419 00:59:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
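The get_main_ns_ip trace that begins here (nvmf/common.sh@741-@755) resolves which address the initiator should dial for the current transport. Reconstructed from the trace, the helper amounts to the sketch below; the real function in nvmf/common.sh may differ in detail, and TEST_TRANSPORT, NVMF_INITIATOR_IP, and NVMF_FIRST_TARGET_IP are assumed to be exported by the test environment:

  # IP selection as traced at nvmf/common.sh@741-@755 (reconstruction).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @745
      # The two guards traced at @747: transport set and mapped to a name.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # @748: variable *name*, not value
      [[ -z ${!ip} ]] && return 1                  # @750: indirect expansion -> 10.0.0.1
      echo "${!ip}"                                # @755
  }
  # e.g. TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 -> prints 10.0.0.1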
00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.419 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.680 nvme0n1 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.680 
00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.680 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.681 00:59:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.940 nvme0n1 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:24.940 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.941 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.201 nvme0n1 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.201 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.460 00:59:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.460 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.719 nvme0n1 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.719 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
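Each connect_authenticate pass traced above drives the SPDK initiator through the same three RPCs: pin the negotiable digest and DH group, attach with the key pair for the current keyid, then confirm the controller actually appeared before detaching (the `nvme0n1` lines are the namespace surfacing on success). One iteration, condensed from the host/auth.sh@55-@65 records; rpc_cmd is the test wrapper around scripts/rpc.py, and the key1/ckey1 names are assumed to have been registered with the initiator's keyring earlier in the script:

  # One DH-HMAC-CHAP round-trip on the initiator (condensed from the trace).
  digest=sha512 dhgroup=ffdhe4096 keyid=1
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # Attach only succeeds if authentication completed; verify, then clean up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0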
00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.720 00:59:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.979 nvme0n1 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.979 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.238 nvme0n1 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.238 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.497 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.757 nvme0n1 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
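The recurring assignment at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes the --dhchap-ctrlr-key flag disappear from the keyid=4 attach commands in this log: ${var:+words} expands to the bracketed words only when var is set and non-empty, so the array stays empty when no controller key exists for the slot. A standalone illustration of the idiom (key values are placeholders):

  # ${ckeys[keyid]:+...} yields the optional argument pair only when a
  # controller key is defined for the slot; keyid 4 has none in this run.
  declare -a ckeys=([0]="DHHC-1:03:placeholder:" [4]="")
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
  # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
  # keyid=4 -> 0 extra args: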
00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.757 00:59:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.327 nvme0n1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
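The @101/@102/@104 markers threading through this excerpt come from the loop that drives the whole sha512 sweep: ffdhe3072 and ffdhe4096 completed above, and ffdhe6144 begins here, with the inner loop walking all five key slots each time. A skeleton of that driver, inferred from the trace; the keys/ckeys arrays hold the DHHC-1 secrets echoed in the log, and the dhgroups list shows only the groups visible in this excerpt (the full run may cover more):

  # Loop skeleton behind the host/auth.sh@101-@104 records (inferred).
  digest=sha512
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
  for dhgroup in "${dhgroups[@]}"; do              # host/auth.sh@101
      for keyid in "${!keys[@]}"; do               # host/auth.sh@102
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: initiator side
      done
  done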
00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.327 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.898 nvme0n1 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.898 00:59:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.898 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.898 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.898 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.898 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.899 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.468 nvme0n1 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.468 00:59:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.727 nvme0n1 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.986 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.555 nvme0n1 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.555 00:59:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.555 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNhNjE4MmNhMjIyNTBhNzE4NmYwNTRiMzQ3ZjAzZDOgiFGh: 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTg2ZmUxZDVkYjQxNmQxMTM1ZDY1YjgwMzIxZGYwNDI1OWI0M2JmMGZhNjk2NTBmOTQ4ZTA4OGQxZWEzMjdiOBISHPA=: 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.556 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.125 nvme0n1 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.125 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.385 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.386 00:59:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.957 nvme0n1 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.957 00:59:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDg5ZThjN2Q3ZGJlYzA0ZWJiNDk3NjA5YmZlMzdlMDChNEgd: 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDdmYzhhNjMyNmJmMTJiMzgxZTM4MWMwYzZjMGUxYzFFpMci: 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.957 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.897 nvme0n1 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmM3M2M2NDA0NzZiYzQxNGEyOWQxNTM4NjE1Mjc4YmZmMDljMjg1MmMwYWQ1NjI1uP4qlg==: 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: ]] 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzcwZjc3NTk2YTUyYzQyZmQxZDg0NmNhODUyODQ0MjYGMnuK: 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:31.897 00:59:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.897 00:59:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.897 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.836 nvme0n1 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.836 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFiYWE2YWVhYzRmMWNlMTZkNTNjODk5NmQ3MmQyNzQwYjI1YmE2ZDEyZGRlYTA3MjMyZjFhYzYxZDYwZTMxYo2kKhA=: 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:32.837 00:59:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.407 nvme0n1 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ3NGMzMmQ4Nzk2MjBhZGJkNjgxNWJmNzVjY2Y5OWFmMzk3OTY3YjI0MTJjNDI3CcYxhQ==: 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTgxZGM1Y2QzODMwOWNlNTQxYTVkNDA4M2FjNWVmNDI2MWNjYmUzZDA4ZDkwZjNig0p3rw==: 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.407 
00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.407 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.668 request: 00:34:33.668 { 00:34:33.668 "name": "nvme0", 00:34:33.668 "trtype": "tcp", 00:34:33.668 "traddr": "10.0.0.1", 00:34:33.668 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:33.668 "adrfam": "ipv4", 00:34:33.668 "trsvcid": "4420", 00:34:33.668 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:33.668 "method": "bdev_nvme_attach_controller", 00:34:33.668 "req_id": 1 00:34:33.668 } 00:34:33.668 Got JSON-RPC error response 00:34:33.668 response: 00:34:33.668 { 00:34:33.668 "code": -5, 00:34:33.668 "message": "Input/output error" 00:34:33.668 } 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:33.668 
00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.668 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.668 request: 00:34:33.668 { 00:34:33.668 "name": "nvme0", 00:34:33.668 "trtype": "tcp", 00:34:33.668 "traddr": "10.0.0.1", 00:34:33.668 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:33.668 "adrfam": "ipv4", 00:34:33.668 "trsvcid": "4420", 00:34:33.668 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:33.668 "dhchap_key": "key2", 00:34:33.668 "method": "bdev_nvme_attach_controller", 00:34:33.668 "req_id": 1 00:34:33.668 } 00:34:33.668 Got JSON-RPC error response 00:34:33.668 response: 00:34:33.668 { 00:34:33.669 "code": -5, 00:34:33.669 "message": "Input/output error" 00:34:33.669 } 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:33.669 
00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.669 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.929 request: 00:34:33.929 { 00:34:33.929 "name": "nvme0", 00:34:33.929 "trtype": "tcp", 00:34:33.929 "traddr": "10.0.0.1", 00:34:33.929 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:33.929 "adrfam": "ipv4", 00:34:33.929 "trsvcid": "4420", 00:34:33.929 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:33.929 "dhchap_key": "key1", 00:34:33.929 "dhchap_ctrlr_key": "ckey2", 00:34:33.929 "method": "bdev_nvme_attach_controller", 00:34:33.929 "req_id": 1 
00:34:33.929 } 00:34:33.929 Got JSON-RPC error response 00:34:33.929 response: 00:34:33.929 { 00:34:33.929 "code": -5, 00:34:33.929 "message": "Input/output error" 00:34:33.929 } 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:33.929 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.930 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:33.930 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.930 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:33.930 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.930 00:59:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.930 rmmod nvme_tcp 00:34:33.930 rmmod nvme_fabrics 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 642575 ']' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 642575 ']' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 642575' 00:34:33.930 killing process with pid 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 642575 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:33.930 00:59:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.930 00:59:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:36.471 00:59:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:39.017 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:39.017 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:39.277 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:39.538 00:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ddx /tmp/spdk.key-null.nXY /tmp/spdk.key-sha256.43i /tmp/spdk.key-sha384.v1D /tmp/spdk.key-sha512.OtM /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:39.538 00:59:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:42.840 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:34:42.840 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:42.840 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:42.840 00:34:42.840 real 0m57.275s 00:34:42.840 user 0m51.582s 00:34:42.840 sys 0m14.451s 00:34:42.840 01:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:42.840 01:00:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.840 ************************************ 00:34:42.840 END TEST nvmf_auth_host 00:34:42.840 ************************************ 00:34:42.840 01:00:00 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:34:42.840 01:00:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:42.840 01:00:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:42.840 01:00:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:42.840 01:00:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.840 ************************************ 00:34:42.840 START TEST nvmf_digest 00:34:42.840 ************************************ 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:42.840 * Looking for test storage... 
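The nvmf_auth_host trace above cycles every digest/DH-group/key combination through the same two-step pattern: write the DH-HMAC-CHAP material into the kernel target's configfs host entry, then re-attach the SPDK initiator over JSON-RPC. A minimal bash sketch of one iteration, assuming the usual Linux nvmet attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and eliding the DHHC-1 key material; the rpc.py flags are the ones visible in the trace itself:

  # Target side: install the key under the host NQN the test setup created.
  hostnqn=nqn.2024-02.io.spdk:host0
  host=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha512)' > "$host/dhchap_hash"     # digest under test
  echo ffdhe8192      > "$host/dhchap_dhgroup"  # DH group under test
  echo 'DHHC-1:...'   > "$host/dhchap_key"      # host key for this keyid (elided)
  echo 'DHHC-1:...'   > "$host/dhchap_ctrl_key" # optional; enables bidirectional auth
  # Initiator side: restrict SPDK to the combination under test, then connect.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

The three NOT-wrapped attach attempts near the end of the test deliberately omit or mismatch keys (no key, key2 only, key1 with ckey2), which is why each is expected to produce the JSON-RPC "Input/output error" (code -5) responses recorded above; the cleanup that follows kills the target process (pid 642575), unloads nvme-tcp/nvme-fabrics, and removes the nvmet configfs entries.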
00:34:42.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.840 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:43.101 01:00:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:43.101 01:00:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:43.102 01:00:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:51.246 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:51.247 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:51.247 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:51.247 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:51.247 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:51.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.780 ms 00:34:51.247 00:34:51.247 --- 10.0.0.2 ping statistics --- 00:34:51.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.247 rtt min/avg/max/mdev = 0.780/0.780/0.780/0.000 ms 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:34:51.247 00:34:51.247 --- 10.0.0.1 ping statistics --- 00:34:51.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.247 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:51.247 ************************************ 00:34:51.247 START TEST nvmf_digest_clean 00:34:51.247 ************************************ 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=658996 00:34:51.247 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 658996 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 658996 ']' 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.248 
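For reference, the nvmf_tcp_init sequence traced above reduces to the commands below; the interface names (cvl_0_0, cvl_0_1), the namespace name, and the addresses are taken verbatim from this log, so this is a sketch of what the harness ran here, not a general recipe:

  # Isolate the target-side E810 port in its own network namespace so
  # initiator traffic from cvl_0_1 must cross the physical link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends of the link on the same /24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring the links up and open the NVMe/TCP listener port.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1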
01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:51.248 01:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.248 [2024-06-08 01:00:08.540867] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:51.248 [2024-06-08 01:00:08.540950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.248 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.248 [2024-06-08 01:00:08.617163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.248 [2024-06-08 01:00:08.689935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.248 [2024-06-08 01:00:08.689975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.248 [2024-06-08 01:00:08.689982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.248 [2024-06-08 01:00:08.689989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.248 [2024-06-08 01:00:08.689994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:51.248 [2024-06-08 01:00:08.690020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.248 null0 00:34:51.248 [2024-06-08 01:00:09.432721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.248 [2024-06-08 01:00:09.456898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=659333 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 659333 /var/tmp/bperf.sock 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 659333 ']' 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:51.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:51.248 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.248 [2024-06-08 01:00:09.509707] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:51.248 [2024-06-08 01:00:09.509755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659333 ] 00:34:51.507 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.507 [2024-06-08 01:00:09.585116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.507 [2024-06-08 01:00:09.649179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.076 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:52.076 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:52.076 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:52.076 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:52.076 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:52.336 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.336 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.596 nvme0n1 00:34:52.856 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:52.856 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.856 Running I/O for 2 seconds... 
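Each run_bperf iteration above drives the same three-step RPC sequence against the bdevperf instance; spelled out with the exact sockets and arguments this log shows (--ddgst enables the NVMe/TCP data digest under test):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf was launched with --wait-for-rpc, so finish init first.
  $RPC -s /var/tmp/bperf.sock framework_start_init
  # Attach the listener created on 10.0.0.2:4420, with data digest on.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the configured workload (here randread, 4096 B, qd 128).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests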
00:34:54.765 00:34:54.765 Latency(us) 00:34:54.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.765 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:54.765 nvme0n1 : 2.00 20880.95 81.57 0.00 0.00 6122.17 2908.16 18459.31 00:34:54.765 =================================================================================================================== 00:34:54.765 Total : 20880.95 81.57 0.00 0.00 6122.17 2908.16 18459.31 00:34:54.765 0 00:34:54.765 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:54.765 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:54.765 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:54.765 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:54.765 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:54.765 | select(.opcode=="crc32c") 00:34:54.765 | "\(.module_name) \(.executed)"' 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 659333 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 659333 ']' 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 659333 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 659333 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 659333' 00:34:55.025 killing process with pid 659333 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 659333 00:34:55.025 Received shutdown signal, test time was about 2.000000 seconds 00:34:55.025 00:34:55.025 Latency(us) 00:34:55.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.025 =================================================================================================================== 00:34:55.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.025 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 659333 00:34:55.284 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:55.284 01:00:13 
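The get_accel_stats check that closes each run (its xtrace is interleaved above) confirms which accel module actually computed the crc32c digests; with DSA disabled (scan_dsa=false) the expected module is software. A standalone form of the same check, using the jq filter verbatim from this trace:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
  # Prints "<module_name> <executed>" pairs; the harness then asserts
  # executed > 0 and that the module matches exp_module (software).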
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:55.284 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:55.284 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:55.284 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=660485 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 660485 /var/tmp/bperf.sock 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 660485 ']' 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:55.285 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.285 [2024-06-08 01:00:13.391987] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:55.285 [2024-06-08 01:00:13.392041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660485 ] 00:34:55.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:55.285 Zero copy mechanism will not be used. 
00:34:55.285 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.285 [2024-06-08 01:00:13.467190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.285 [2024-06-08 01:00:13.520646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.232 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.493 nvme0n1 00:34:56.493 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:56.493 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.493 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.493 Zero copy mechanism will not be used. 00:34:56.493 Running I/O for 2 seconds... 
00:34:59.036 00:34:59.036 Latency(us) 00:34:59.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.036 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:59.036 nvme0n1 : 2.00 2407.39 300.92 0.00 0.00 6642.52 2880.85 16056.32 00:34:59.036 =================================================================================================================== 00:34:59.036 Total : 2407.39 300.92 0.00 0.00 6642.52 2880.85 16056.32 00:34:59.036 0 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:59.036 | select(.opcode=="crc32c") 00:34:59.036 | "\(.module_name) \(.executed)"' 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 660485 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 660485 ']' 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 660485 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 660485 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 660485' 00:34:59.036 killing process with pid 660485 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 660485 00:34:59.036 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.036 00:34:59.036 Latency(us) 00:34:59.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.036 =================================================================================================================== 00:34:59.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.036 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 660485 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:59.036 01:00:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=661165 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 661165 /var/tmp/bperf.sock 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 661165 ']' 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:59.036 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.036 [2024-06-08 01:00:17.104532] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:34:59.036 [2024-06-08 01:00:17.104589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661165 ] 00:34:59.036 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.036 [2024-06-08 01:00:17.179742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.036 [2024-06-08 01:00:17.233320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.608 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:59.608 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:34:59.608 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:59.608 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:59.608 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:59.869 01:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.869 01:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.443 nvme0n1 00:35:00.443 01:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:00.443 01:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.443 Running I/O for 2 seconds... 
00:35:02.391 00:35:02.391 Latency(us) 00:35:02.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.391 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:02.391 nvme0n1 : 2.01 21318.76 83.28 0.00 0.00 5992.60 5215.57 15291.73 00:35:02.391 =================================================================================================================== 00:35:02.391 Total : 21318.76 83.28 0.00 0.00 5992.60 5215.57 15291.73 00:35:02.391 0 00:35:02.391 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:02.391 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:02.391 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:02.391 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:02.391 | select(.opcode=="crc32c") 00:35:02.391 | "\(.module_name) \(.executed)"' 00:35:02.391 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 661165 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 661165 ']' 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 661165 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 661165 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 661165' 00:35:02.652 killing process with pid 661165 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 661165 00:35:02.652 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.652 00:35:02.652 Latency(us) 00:35:02.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.652 =================================================================================================================== 00:35:02.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 661165 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:02.652 01:00:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=661855 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 661855 /var/tmp/bperf.sock 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 661855 ']' 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:02.652 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.652 [2024-06-08 01:00:20.901489] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:02.652 [2024-06-08 01:00:20.901541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661855 ] 00:35:02.652 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:02.652 Zero copy mechanism will not be used. 
00:35:02.652 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.912 [2024-06-08 01:00:20.976178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.912 [2024-06-08 01:00:21.029057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.483 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:03.483 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:35:03.483 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:03.483 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:03.483 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:03.743 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.743 01:00:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.003 nvme0n1 00:35:04.003 01:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:04.003 01:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.263 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.263 Zero copy mechanism will not be used. 00:35:04.263 Running I/O for 2 seconds... 
00:35:06.202 00:35:06.202 Latency(us) 00:35:06.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.202 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:06.202 nvme0n1 : 2.00 3011.29 376.41 0.00 0.00 5304.20 2976.43 15837.87 00:35:06.202 =================================================================================================================== 00:35:06.202 Total : 3011.29 376.41 0.00 0.00 5304.20 2976.43 15837.87 00:35:06.202 0 00:35:06.202 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:06.202 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:06.202 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:06.202 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:06.202 | select(.opcode=="crc32c") 00:35:06.202 | "\(.module_name) \(.executed)"' 00:35:06.202 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 661855 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 661855 ']' 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 661855 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 661855 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 661855' 00:35:06.463 killing process with pid 661855 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 661855 00:35:06.463 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.463 00:35:06.463 Latency(us) 00:35:06.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.463 =================================================================================================================== 00:35:06.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 661855 00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 658996 00:35:06.463 01:00:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 658996 ']'
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 658996
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 658996
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 658996'
00:35:06.463 killing process with pid 658996
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 658996
00:35:06.463 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 658996
00:35:06.724
00:35:06.724 real 0m16.380s
00:35:06.724 user 0m32.161s
00:35:06.724 sys 0m3.218s
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:35:06.724 ************************************
00:35:06.724 END TEST nvmf_digest_clean
00:35:06.724 ************************************
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:35:06.724 ************************************
00:35:06.724 START TEST nvmf_digest_error
00:35:06.724 ************************************
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=662566
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 662566
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 662566 ']'
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:06.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:06.724 01:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:06.724 [2024-06-08 01:00:24.994978] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:35:06.724 [2024-06-08 01:00:24.995064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:06.985 EAL: No free 2048 kB hugepages reported on node 1
00:35:06.985 [2024-06-08 01:00:25.065121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:06.985 [2024-06-08 01:00:25.132717] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:06.985 [2024-06-08 01:00:25.132755] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:06.985 [2024-06-08 01:00:25.132763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:06.985 [2024-06-08 01:00:25.132769] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:06.985 [2024-06-08 01:00:25.132775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:06.985 [2024-06-08 01:00:25.132797] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:07.556 [2024-06-08 01:00:25.798827] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:07.556 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:07.818 null0
00:35:07.818 [2024-06-08 01:00:25.879669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:07.818 [2024-06-08 01:00:25.903866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=662912
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 662912 /var/tmp/bperf.sock
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 662912 ']'
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:07.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:07.818 01:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:07.818 [2024-06-08 01:00:25.957285] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
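
The prologue above is SPDK's standard error-injection setup: nvmf_tgt is launched with --wait-for-rpc, which holds subsystem initialization until an RPC releases it, so accel_assign_opc can route every crc32c operation to the "error" accel module before the I/O path exists; common_target_config then drives a batch of RPCs over /var/tmp/spdk.sock, whose output is the "null0" echo plus the TCP transport and 10.0.0.2:4420 listener notices above. A minimal sketch of an equivalent target-side batch follows, assuming the workspace rpc.py; the null-bdev geometry and the subsystem serial number are illustrative guesses, not values taken from host/digest.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Must run while initialization is still paused by --wait-for-rpc:
  $rpc accel_assign_opc -o crc32c -m error
  $rpc framework_start_init
  # Build the target: a null bdev exported over NVMe/TCP.
  $rpc bdev_null_create null0 100 4096          # name, size (MB), block size; geometry assumed
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # serial assumed
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with -z, so it sits idle with its own RPC server on /var/tmp/bperf.sock until perform_tests is issued in the next block.
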
00:35:07.818 [2024-06-08 01:00:25.957329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662912 ]
00:35:07.818 EAL: No free 2048 kB hugepages reported on node 1
00:35:07.818 [2024-06-08 01:00:26.030099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:07.818 [2024-06-08 01:00:26.083563] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:08.760 01:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:09.020 nvme0n1
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:09.020 01:00:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:09.281 Running I/O for 2 seconds...
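
run_bperf_err drives everything through that private socket: bdev_nvme_set_options --bdev-retry-count -1 makes bdev-layer retries unbounded (and --nvme-error-stat keeps per-status counters), the controller is attached with data digest enabled (--ddgst), and only then is crc32c corruption armed before perform_tests releases the queued 2-second randread job. Every "data digest error" record that follows is the host's nvme_tcp layer rejecting a C2H data PDU because the locally computed CRC-32C, deliberately corrupted by the error module, no longer matches the DDGST on the wire; the command then completes with status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 (the do-not-retry bit clear), so the unbounded retry policy resubmits it and the workload keeps making progress. Condensed into a replayable sketch, with paths exactly as in this job (-i 256 reads as an injection interval, i.e. roughly every 256th crc32c operation):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bperf.sock
  # Unbounded retries plus per-status error counters:
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays disabled while the connection is brought up:
  $rpc -s $sock accel_error_inject_error -o crc32c -t disable
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm CRC-32C corruption, then release the job bdevperf queued under -z:
  $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 256
  $bperf_py -s $sock perform_tests
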
00:35:09.281 [2024-06-08 01:00:27.393252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.393281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.393290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.403989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.404009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.404017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.416711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.416740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.429878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.429896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.429903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.441109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.441128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.441134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.454298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.454316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.454323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.466114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.466132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.466139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.477579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.477597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.477604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.490682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.490700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.490707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.503260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.503278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.503286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.515841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.515859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.515865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.527077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.527094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.527101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.539957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.539975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.539981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.551749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.551767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.281 [2024-06-08 01:00:27.551773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.281 [2024-06-08 01:00:27.563949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.281 [2024-06-08 01:00:27.563967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.282 [2024-06-08 01:00:27.563974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.576695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.576712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.576719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.588717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.588734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.588741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.600239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.600256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.600263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.612567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.612585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.612591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.625450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.625467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.625480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.637194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.637211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.637217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.649955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.649973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.649979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.662505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.662522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.662528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.674356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.674373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.543 [2024-06-08 01:00:27.674380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.543 [2024-06-08 01:00:27.687518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.543 [2024-06-08 01:00:27.687535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.687542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.699468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.699485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.699492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.711410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.711427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.711434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.723680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.723697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 
[2024-06-08 01:00:27.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.736184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.736204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.736210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.748190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.748207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.748213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.759752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.759769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.759775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.773452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.773469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.773476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.784628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.784645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.784651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.796342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.796359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.796366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.809659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.809677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12575 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.809684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.544 [2024-06-08 01:00:27.820771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.544 [2024-06-08 01:00:27.820788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.544 [2024-06-08 01:00:27.820794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.833842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.833860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.833866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.846652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.846670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.846677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.857547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.857563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.857570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.869777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.869795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.869803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.882164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.882181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.894547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:124 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.894571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.906366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.906383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.906389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.919716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.919734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.919740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.931985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.932002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.932009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.942528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.942546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.942556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.955279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.955296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.955303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.968149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.968166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.968173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.980150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.980168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.980174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:27.992326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:27.992343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:27.992349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.004062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.004080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.004086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.016591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.016614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.029164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.029181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.029187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.040973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.040990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.040997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.053033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.053054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.053060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.066103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 
00:35:09.806 [2024-06-08 01:00:28.066121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.066127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.806 [2024-06-08 01:00:28.078623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:09.806 [2024-06-08 01:00:28.078640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.806 [2024-06-08 01:00:28.078646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.090924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.090942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.090950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.101496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.101513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.101520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.115572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.115590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.115596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.126945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.126962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.126969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.139338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.139356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.139362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.150940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.150957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.150967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.163723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.163741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.163748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.176492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.176509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.176515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.187538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.187555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.187561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.200469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.200486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.200492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.213229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.213246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.213252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.225191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.225208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.225214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.238301] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.238318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.238324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.250310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.250328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.250334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.262310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.262329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.262335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.274403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.274420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.274426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.068 [2024-06-08 01:00:28.286571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.068 [2024-06-08 01:00:28.286588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.068 [2024-06-08 01:00:28.286595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.069 [2024-06-08 01:00:28.299221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.069 [2024-06-08 01:00:28.299237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.069 [2024-06-08 01:00:28.299244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.069 [2024-06-08 01:00:28.312631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.069 [2024-06-08 01:00:28.312648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.069 [2024-06-08 01:00:28.312654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:10.069 [2024-06-08 01:00:28.322980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.069 [2024-06-08 01:00:28.322997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.069 [2024-06-08 01:00:28.323003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.069 [2024-06-08 01:00:28.337093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.069 [2024-06-08 01:00:28.337111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.069 [2024-06-08 01:00:28.337118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.069 [2024-06-08 01:00:28.349321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.069 [2024-06-08 01:00:28.349339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.069 [2024-06-08 01:00:28.349346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.330 [2024-06-08 01:00:28.361232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.330 [2024-06-08 01:00:28.361249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.361255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.373045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.373063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.373070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.384725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.384742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.384748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.396773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.396790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.396796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.408982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.408999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.409005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.421795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.421813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.421819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.434290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.434308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.434315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.446878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.446896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.446902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.458281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.458298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.458304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.471568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.471585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.471594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.331 [2024-06-08 01:00:28.483307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0) 00:35:10.331 [2024-06-08 01:00:28.483324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.331 [2024-06-08 01:00:28.483330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.331 [2024-06-08 01:00:28.494935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0)
00:35:10.331 [2024-06-08 01:00:28.494952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.331 [2024-06-08 01:00:28.494958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0xe53ee0, single-block READ command print with varying cid/lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for some seventy further READs between 01:00:28.506 and 01:00:29.368 ...]
00:35:11.119 [2024-06-08 01:00:29.377337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe53ee0)
00:35:11.119 [2024-06-08 01:00:29.377353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.119 [2024-06-08 01:00:29.377359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.119
00:35:11.119 Latency(us)
00:35:11.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:11.119 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:11.119 nvme0n1 : 2.00 20737.63 81.01 0.00 0.00 6163.12 3440.64 18350.08
00:35:11.119 ===================================================================================================================
00:35:11.119 Total : 20737.63 81.01 0.00 0.00 6163.12 3440.64 18350.08
00:35:11.119 0
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:11.378 | .driver_specific
00:35:11.378 | .nvme_error
00:35:11.378 | .status_code
00:35:11.378 | .command_transient_transport_error'
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
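The get_transient_errcount check traced above reduces to the following sketch (assuming the same /var/tmp/bperf.sock RPC socket, with $SPDK_DIR standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path). The per-bdev NVMe status-code counters are only populated because the bdev_nvme driver was configured with --nvme-error-stat, and jq pulls out the transient-transport-error count that the injected digest errors produce:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  errcount=$($rpc bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # 163 in this run; zero digest errors would fail the test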
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 662912
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 662912 ']'
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 662912
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 662912
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 662912'
00:35:11.378 killing process with pid 662912
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 662912
00:35:11.378 Received shutdown signal, test time was about 2.000000 seconds
00:35:11.378
00:35:11.378 Latency(us)
00:35:11.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:11.378 ===================================================================================================================
00:35:11.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:11.378 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 662912
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=663596
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 663596 /var/tmp/bperf.sock
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 663596 ']'
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:11.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
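For reference, the bdevperf launch just traced follows the usual bperf pattern, sketched here with the same parameters (-z holds bdevperf idle until perform_tests arrives over the RPC socket named by -r). The readiness loop below is a simplified stand-in for the harness's waitforlisten helper, using rpc_get_methods only as a cheap probe of the socket:

  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # poll until the UNIX domain socket answers
  done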
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:11.639 01:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:11.639 [2024-06-08 01:00:29.774733] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:35:11.639 [2024-06-08 01:00:29.774786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663596 ]
00:35:11.639 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:11.639 Zero copy mechanism will not be used.
00:35:11.639 EAL: No free 2048 kB hugepages reported on node 1
00:35:11.639 [2024-06-08 01:00:29.849509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.639 [2024-06-08 01:00:29.902324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:12.582 01:00:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:12.842 nvme0n1
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:12.842 01:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:13.102 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:13.102 Zero copy mechanism will not be used.
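Condensed, the sequence just traced is the heart of the digest-error test: count NVMe status codes per bdev and retry indefinitely at the bdev layer, connect with TCP data digest enabled, then arm the accel software module's CRC-32C error injection so subsequent reads fail digest verification. A sketch over the same socket ($SPDK_DIR again abbreviating the workspace path; every RPC exactly as traced above):

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable        # connect with injection off
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # prints the new bdev: nvme0n1
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt CRC-32C results
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests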
00:35:13.102 Running I/O for 2 seconds...
00:35:13.102 [2024-06-08 01:00:31.202018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10)
00:35:13.102 [2024-06-08 01:00:31.202049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.102 [2024-06-08 01:00:31.202057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0x16a0f10, 32-block READ command print with varying cid/lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for some fifty further READs between 01:00:31.215 and 01:00:31.944 ...]
00:35:13.888 [2024-06-08 01:00:31.957016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10)
00:35:13.888 [2024-06-08 01:00:31.957034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.888 [2024-06-08 01:00:31.957041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0
dnr:0 00:35:13.888 [2024-06-08 01:00:31.969571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:31.969590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:31.969597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:31.983236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:31.983254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:31.983260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:31.994747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:31.994763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:31.994770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:32.009150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:32.009168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:32.009175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:32.023361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:32.023378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:32.023385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:32.036610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:32.036629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:32.036637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.888 [2024-06-08 01:00:32.050461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.888 [2024-06-08 01:00:32.050479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.888 [2024-06-08 01:00:32.050486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.063925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.063943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.063950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.077559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.077577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.077584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.093172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.093190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.093200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.108231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.108249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.108256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.122629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.122647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.122654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.134099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.134118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.134126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.146972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.146990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.146997] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.889 [2024-06-08 01:00:32.159418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:13.889 [2024-06-08 01:00:32.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.889 [2024-06-08 01:00:32.159443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.149 [2024-06-08 01:00:32.171669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.149 [2024-06-08 01:00:32.171688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-06-08 01:00:32.171694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.149 [2024-06-08 01:00:32.185627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.149 [2024-06-08 01:00:32.185645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-06-08 01:00:32.185652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.149 [2024-06-08 01:00:32.198673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.149 [2024-06-08 01:00:32.198692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.149 [2024-06-08 01:00:32.198698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.149 [2024-06-08 01:00:32.211916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.149 [2024-06-08 01:00:32.211934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.211941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.226281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.226300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.226306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.239424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.239443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.239449] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.249804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.249823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.249830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.264199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.264219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.264226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.278214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.278232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.278239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.293183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.293201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.293208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.308747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.308765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.308772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.322736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.322754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.322764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.337263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.337282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:14.150 [2024-06-08 01:00:32.337288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.350149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.350168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.350174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.364286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.364306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.364312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.376465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.376483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.376490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.390525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.390543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.390550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.404319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.404337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.404344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.418961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.418979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.418987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.150 [2024-06-08 01:00:32.431499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.150 [2024-06-08 01:00:32.431517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.150 [2024-06-08 01:00:32.431524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.444691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.444713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.444719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.458413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.458431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.458438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.472461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.472480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.472486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.485971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.485990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.485996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.501771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.501790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.501796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.516392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.516415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.516422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.411 [2024-06-08 01:00:32.531072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.411 [2024-06-08 01:00:32.531090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.411 [2024-06-08 01:00:32.531097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.546125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.546144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.546150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.561282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.561301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.561307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.575439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.575457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.575464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.590272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.590291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.590298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.605332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.605350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.605356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.618145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.618163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.630929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 
[2024-06-08 01:00:32.630947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.630954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.646006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.646024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.646031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.661312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.661330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.661336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.676374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.676393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.676400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.412 [2024-06-08 01:00:32.691459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.412 [2024-06-08 01:00:32.691477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.412 [2024-06-08 01:00:32.691486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.704490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.704509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.704516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.715020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.715039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.715045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.728057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.728076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.728082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.742359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.742377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.742384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.756387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.756412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.756419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.771596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.771614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.771621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.783355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.783374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.783380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.795342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.795361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.795367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.809617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.809640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.823375] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.823393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.823400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.836735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.836754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.836761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.849693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.849711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.849718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.862194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.862213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.862220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.876555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.876574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.876580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.891068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.891087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.891094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.907371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.907389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.907396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:14.674 [2024-06-08 01:00:32.923235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.923254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.923260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.938985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.939005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.674 [2024-06-08 01:00:32.953894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.674 [2024-06-08 01:00:32.953913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.674 [2024-06-08 01:00:32.953921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:32.967488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:32.967505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:32.967512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:32.981706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:32.981724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:32.981731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:32.994780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:32.994799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:32.994805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.009888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.009907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.009914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.024758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.024776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.024783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.034819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.034837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.034843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.048006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.048027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.048033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.060689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.060707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.060713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.073783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.073801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.073808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.087290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.087307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.087313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.100323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.100341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.100347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.113342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.113360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.113366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.126608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.126626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.126633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.140943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.140961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.140967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.155649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.937 [2024-06-08 01:00:33.155674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.937 [2024-06-08 01:00:33.170895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.937 [2024-06-08 01:00:33.170913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.938 [2024-06-08 01:00:33.170919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.938 [2024-06-08 01:00:33.186265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a0f10) 00:35:14.938 [2024-06-08 01:00:33.186283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.938 [2024-06-08 01:00:33.186289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.938 00:35:14.938 Latency(us) 00:35:14.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:14.938 nvme0n1 : 2.00 2230.73 278.84 0.00 0.00 7166.74 1802.24 16602.45 00:35:14.938 
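For reference, the summary rows above are internally consistent with the job's 131072-byte (128 KiB) I/O size; this is a check on the reported numbers, not additional log output:

  2230.73 IOPS x 131072 B ~= 292,386,243 B/s, and 292,386,243 / 1,048,576 ~= 278.84 MiB/s

so over the reported 2.00 s runtime the randread job moved roughly 558 MiB while digest-error injection was active on the transport.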
00:35:14.938 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:14.938 | .driver_specific
00:35:14.938 | .nvme_error
00:35:14.938 | .status_code
00:35:14.938 | .command_transient_transport_error'
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 663596
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 663596 ']'
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 663596
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 663596
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 663596'
killing process with pid 663596
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 663596
Received shutdown signal, test time was about 2.000000 seconds
00:35:15.199
00:35:15.199 Latency(us)
00:35:15.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:15.199 ===================================================================================================================
00:35:15.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:15.199 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 663596
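The get_transient_errcount trace above reduces to a single RPC plus a jq filter: it reads the per-status-code NVMe error counters kept by the bdev layer (enabled with bdev_nvme_set_options --nvme-error-stat, visible in the setup trace below) and extracts the transient-transport-error count, the 144 that digest.sh asserts on. A minimal standalone sketch, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock; the variable names are illustrative:

  #!/usr/bin/env bash
  # Equivalent of host/digest.sh's get_transient_errcount for bdev nvme0n1;
  # the rpc.py path and socket address are taken from the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # The test asserts the injected digest corruption really produced errors:
  (( errs > 0 )) && echo "saw $errs transient transport errors"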
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=664278
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 664278 /var/tmp/bperf.sock
00:35:15.462 01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 664278 ']'
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
01:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:15.462 [2024-06-08 01:00:33.586499] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:35:15.462 [2024-06-08 01:00:33.586552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664278 ]
00:35:15.462 EAL: No free 2048 kB hugepages reported on node 1
00:35:15.462 [2024-06-08 01:00:33.660600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:15.462 [2024-06-08 01:00:33.712624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:35:16.075 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:16.075 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:35:16.075 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:16.075 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:16.336 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:16.336 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:16.336 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:16.336 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:16.337 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:16.908 nvme0n1
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
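The setup trace above boils down to four RPCs. Below is a minimal sketch of the same sequence, under the assumption (consistent with the trace) that bperf_rpc targets the bdevperf instance's socket at /var/tmp/bperf.sock while plain rpc_cmd targets the default RPC socket of the long-running SPDK application; the variable and function names are illustrative, not part of digest.sh:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # bperf_rpc equivalent

  # Keep per-status-code NVMe error counters and disable bdev-level retries,
  # so every transient transport error is visible in bdev_get_iostat.
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Make sure no stale crc32c corruption is active before attaching.
  "$rpc" accel_error_inject_error -o crc32c -t disable

  # Attach the target with TCP data digest enabled (--ddgst): every data PDU
  # now carries a crc32c checksum that is verified on receive.
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 256th crc32c the accel layer computes; each corrupted digest
  # then surfaces as a "Data digest error" and a TRANSIENT TRANSPORT ERROR (00/22).
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256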
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:16.908 01:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:16.908 Running I/O for 2 seconds...
00:35:16.908 [2024-06-08 01:00:35.013602] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58
00:35:16.908 [2024-06-08 01:00:35.014013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:16.908 [2024-06-08 01:00:35.014040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:16.908 [2024-06-08 01:00:35.025843] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58
00:35:16.908 [2024-06-08 01:00:35.026224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:16.908 [2024-06-08 01:00:35.026244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same three-line pattern repeats for every in-flight WRITE, roughly every 12 ms from 01:00:35.037972 through 01:00:36.588475: a crc32c data-digest mismatch at tcp.c:2062 followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only cid (0/1/2/7) and lba varying ...]
00:35:18.514 [2024-06-08 01:00:36.600139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58
00:35:18.514 [2024-06-08 01:00:36.600566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:7 nsid:1 lba:22718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.600582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.612243] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.612626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.612642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.624360] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.624769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.624784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.636457] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.636832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.636847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.648521] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.648768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.648783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.660606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.660977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.660992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.672667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.673082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.673097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.684737] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.685104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.685120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.696841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.697231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.697247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.708892] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.709280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.514 [2024-06-08 01:00:36.709296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.514 [2024-06-08 01:00:36.721032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.514 [2024-06-08 01:00:36.721413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.721429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.733084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.733437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.733453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.745217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.745643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.745659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.757292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.757672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.757688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.769372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.769632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.769648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.781441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.781704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.781720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.515 [2024-06-08 01:00:36.793545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.515 [2024-06-08 01:00:36.793789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.515 [2024-06-08 01:00:36.793808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.805640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.806006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.806022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.817752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.818142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.818158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.829829] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.830200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.830216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.841899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.842165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.842181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.854063] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.854333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.854349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.866116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.866511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.866527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.878211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.878459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.878475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.890288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.890550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.890565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.902358] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.902623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.902640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.914438] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.914845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.914860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.926554] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.926930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.926946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.938628] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 
01:00:36.939040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.939055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.950718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.950971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.950988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.962754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.963023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.963038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.974831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.975236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.975251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.986922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.987334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.987349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 [2024-06-08 01:00:36.999018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4370) with pdu=0x2000190feb58 00:35:18.776 [2024-06-08 01:00:36.999408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.776 [2024-06-08 01:00:36.999424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.776 00:35:18.776 Latency(us) 00:35:18.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.776 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:18.776 nvme0n1 : 2.01 21067.63 82.30 0.00 0.00 6063.88 4287.15 12451.84 00:35:18.776 =================================================================================================================== 00:35:18.776 Total : 21067.63 82.30 0.00 0.00 6063.88 4287.15 12451.84 00:35:18.776 0 00:35:18.776 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:18.776 01:00:37 
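
The trace that follows is digest.sh's get_transient_errcount: it asks bdevperf's RPC socket for per-bdev NVMe error statistics and extracts the transient-transport-error counter with jq. As a minimal standalone sketch (assuming, as in this run, bdevperf serving RPC on /var/tmp/bperf.sock), the same check is:

    # Pull the per-status-code NVMe error counters for one bdev and print the
    # COMMAND TRANSIENT TRANSPORT ERROR count (jq filter exactly as logged below).
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    # digest.sh asserts the count is non-zero; here that test evaluates to (( 165 > 0 )).
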
00:35:18.776 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:18.776 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:18.776 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:18.776 | .driver_specific
00:35:18.776 | .nvme_error
00:35:18.776 | .status_code
00:35:18.776 | .command_transient_transport_error'
00:35:18.776 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 ))
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 664278
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 664278 ']'
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 664278
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 664278
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 664278'
killing process with pid 664278
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 664278
Received shutdown signal, test time was about 2.000000 seconds
00:35:19.037
00:35:19.037 Latency(us)
00:35:19.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:19.037 ===================================================================================================================
00:35:19.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:19.037 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 664278
00:35:19.298 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=664966
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 664966 /var/tmp/bperf.sock
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 664966 ']'
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
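
As the trace shows, bdevperf is launched with -z, so it starts idle; no I/O runs until the perform_tests RPC is sent over the same socket at digest.sh@69 further down. A sketch of the pairing, using the paths this run logs:

    # Start bdevperf idle (-z) on a private RPC socket, then trigger the run:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
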
01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:19.298 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:19.298 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:19.298 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:19.298 01:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:19.298 [2024-06-08 01:00:37.409203] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
[2024-06-08 01:00:37.409260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664966 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-08 01:00:37.482847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-08 01:00:37.535677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:35:20.239 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:20.239 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:20.239 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:20.499 nvme0n1
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
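
Condensed from the @61–@67 trace above: each iteration configures bdevperf through four RPC calls before the run — turn on per-status-code NVMe error counting with unlimited bdev-layer retries, make sure crc32c injection starts disabled, attach the target with the TCP data digest (--ddgst) enabled, then arm crc32c corruption in the accel layer. As plain shell (a sketch; socket, address, and NQN are the ones this run logs):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors; -1 = retry indefinitely
    $RPC accel_error_inject_error -o crc32c -t disable                   # attach with injection off
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # prints the new bdev: nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # arm crc32c corruption (-t corrupt -i 32, as logged)
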
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:20.499 01:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:20.499 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:20.499 Zero copy mechanism will not be used.
00:35:20.499 Running I/O for 2 seconds...
00:35:20.760 [2024-06-08 01:00:38.798388] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90
00:35:20.760 [2024-06-08 01:00:38.798859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.760 [2024-06-08 01:00:38.798885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the second run repeats the same pattern through 01:00:39.381: a data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 for each 128 KiB WRITE (sqid:1 cid:15, len:32), each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061 ...]
00:35:21.285 [2024-06-08 01:00:39.381276]
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.381618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.381636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.389753] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.390108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.390126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.400551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.400892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.400910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.411105] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.411523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.411540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.419973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.420293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.420310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.429687] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.430028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.430044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.437453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.437833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.437850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:21.285 [2024-06-08 01:00:39.444269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.444584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.453380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.453758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.453776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.461587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.461686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.461702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.473174] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.473494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.473512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.481991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.482337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.482354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.491300] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.491646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.491664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.500271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.500630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.500648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.511010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.511439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.521972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.522072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.522088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.532572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.285 [2024-06-08 01:00:39.532790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-06-08 01:00:39.532806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-06-08 01:00:39.542851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.286 [2024-06-08 01:00:39.543167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-06-08 01:00:39.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-06-08 01:00:39.554408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.286 [2024-06-08 01:00:39.554784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-06-08 01:00:39.554801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-06-08 01:00:39.564893] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.286 [2024-06-08 01:00:39.564998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-06-08 01:00:39.565014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.576725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.577082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.577100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.587667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.587968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.587985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.599740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.600169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.600187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.610787] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.611124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.611141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.620868] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.621193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.621210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.631103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.631422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.631440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.639216] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.639578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.639596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.647028] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.647460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.647478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.656511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.656940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.656958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.664124] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.664340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.664357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.671427] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.671761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.671778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.680863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.681191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.690047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.690382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.690400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.700557] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.700895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.700913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.710138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.710441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 
[2024-06-08 01:00:39.710459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.719613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.719933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.728310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.728679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.728697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.736963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.547 [2024-06-08 01:00:39.737353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.547 [2024-06-08 01:00:39.737371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.547 [2024-06-08 01:00:39.745707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.548 [2024-06-08 01:00:39.745934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-06-08 01:00:39.745951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.548 [2024-06-08 01:00:39.756180] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.548 [2024-06-08 01:00:39.756537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-06-08 01:00:39.756555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.548 [2024-06-08 01:00:39.767320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.548 [2024-06-08 01:00:39.767551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-06-08 01:00:39.767568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.548 [2024-06-08 01:00:39.778883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:21.548 [2024-06-08 01:00:39.779011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL 
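Each cycle above is the TCP transport's data-digest check firing. NVMe/TCP can append a CRC32C digest (DDGST) to each data-bearing PDU; data_crc32_calc_done() recomputes the digest over the received payload and flags a mismatch instead of silently accepting corrupted data. Below is a minimal standalone sketch of that check, assuming nothing beyond libc; the bitwise crc32c() and pdu_data_digest_ok() are illustrative stand-ins, not SPDK's internal helpers.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli), the checksum NVMe/TCP specifies for its
 * header (HDGST) and data (DDGST) digests. Bitwise for clarity, not speed. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}

/* Illustrative stand-in for the comparison made once a PDU's payload has
 * been read: recompute CRC32C over the data and compare it against the
 * DDGST that arrived on the wire. */
static bool pdu_data_digest_ok(const uint8_t *data, size_t len,
                               uint32_t received_ddgst)
{
    return crc32c(data, len) == received_ddgst;
}

int main(void)
{
    uint8_t payload[] = "nvme/tcp pdu payload";
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    printf("intact:    %s\n",
           pdu_data_digest_ok(payload, sizeof(payload), ddgst) ? "ok" : "data digest error");
    payload[3] ^= 0x01; /* flip one bit, as a digest-error injection effectively does */
    printf("corrupted: %s\n",
           pdu_data_digest_ok(payload, sizeof(payload), ddgst) ? "ok" : "data digest error");
    return 0;
}

A single flipped payload bit changes the CRC32C value and trips the check, which is why every injected error in this run is caught and surfaced as a transport-level failure instead of bad data reaching the namespace.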
[... digest-error cycle continues unchanged from 01:00:39.789 through 01:00:40.247, elapsed-time prefix advancing from 00:35:21.548 to 00:35:22.072: every WRITE on sqid:1 cid:15 fails the data digest check at tcp.c:2062:data_crc32_calc_done on tqpair=(0x12c4700) with pdu=0x2000190fef90 and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
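The completion line of each cycle prints the NVMe status as (SCT/SC): (00/22) is status code type 0x0, generic command status, with status code 0x22, Transient Transport Error, and dnr:0 means the Do Not Retry bit is clear, so the host is permitted to resubmit the WRITE. The following is a hedged sketch of how those fields unpack from the completion's 16-bit status word, with the bit layout per the NVMe base specification; the struct and function names are illustrative, not SPDK's.

#include <stdint.h>
#include <stdio.h>

/* Completion-queue-entry status word: dword 3 bits 16..31, i.e. the phase
 * tag followed by the 15-bit status field. */
struct nvme_cpl_status {
    uint8_t p;    /* phase tag (bit 0) */
    uint8_t sc;   /* status code (bits 1..8) */
    uint8_t sct;  /* status code type (bits 9..11) */
    uint8_t crd;  /* command retry delay (bits 12..13, NVMe 1.4+) */
    uint8_t m;    /* more: extra info in the error log page (bit 14) */
    uint8_t dnr;  /* do not retry (bit 15) */
};

static struct nvme_cpl_status decode_status(uint16_t raw)
{
    struct nvme_cpl_status s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xFF,
        .sct = (raw >> 9) & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0 / SC 0x22: the TRANSIENT TRANSPORT ERROR (00/22) seen above. */
    uint16_t raw = (uint16_t)((0x22 << 1) | (0x0 << 9));
    struct nvme_cpl_status s = decode_status(raw);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Run as-is this prints (00/22) p:0 m:0 dnr:0, matching the completions in the log: a retryable, transport-level failure, which is the appropriate mapping for a digest mismatch, since the data rather than the command was damaged in transit.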
[... digest-error cycle continues from 01:00:40.255 through 01:00:40.528, still sqid:1 cid:15, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 ...]
00:35:22.334 [2024-06-08 01:00:40.536041] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90
00:35:22.334 [2024-06-08 01:00:40.536264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.536281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.544047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.544276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.544293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.552179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.552414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.552430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.559673] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.560028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.560046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.568168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.568426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.568442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.576692] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.576976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.576993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.585410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.585705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.585722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.593545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.593894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.593911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.603221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.603533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.603551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.334 [2024-06-08 01:00:40.611910] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.334 [2024-06-08 01:00:40.612154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.334 [2024-06-08 01:00:40.612170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.595 [2024-06-08 01:00:40.621819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.595 [2024-06-08 01:00:40.622069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.595 [2024-06-08 01:00:40.622086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.595 [2024-06-08 01:00:40.630886] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.595 [2024-06-08 01:00:40.631165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.595 [2024-06-08 01:00:40.631182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.595 [2024-06-08 01:00:40.639489] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.595 [2024-06-08 01:00:40.639722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.595 [2024-06-08 01:00:40.639739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.595 [2024-06-08 01:00:40.647509] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.647729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.647744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.655222] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 
[2024-06-08 01:00:40.655466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.655486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.662722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.663078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.663095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.670439] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.670680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.678884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.679225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.679242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.686208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.686472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.686489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.695100] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.695428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.695445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.704524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.704894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.704911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.714633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.714952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.714969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.724255] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.724550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.724567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.731441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.731656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.731672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.740455] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.740829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.740846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.749116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.749477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.749494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.758654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.759029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.759045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.767586] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.767884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.767901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.596 [2024-06-08 01:00:40.776996] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c4700) with pdu=0x2000190fef90 00:35:22.596 [2024-06-08 01:00:40.777369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.596 [2024-06-08 01:00:40.777385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.596 00:35:22.596 Latency(us) 00:35:22.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.596 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:22.596 nvme0n1 : 2.00 3316.39 414.55 0.00 0.00 4816.73 2676.05 17585.49 00:35:22.596 =================================================================================================================== 00:35:22.596 Total : 3316.39 414.55 0.00 0.00 4816.73 2676.05 17585.49 00:35:22.596 0 00:35:22.596 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:22.596 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:22.596 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:22.596 | .driver_specific 00:35:22.596 | .nvme_error 00:35:22.596 | .status_code 00:35:22.596 | .command_transient_transport_error' 00:35:22.596 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 664966 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 664966 ']' 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 664966 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:22.857 01:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 664966 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 664966' 00:35:22.857 killing process with pid 664966 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 664966 00:35:22.857 Received shutdown signal, test time was about 2.000000 seconds 00:35:22.857 00:35:22.857 Latency(us) 00:35:22.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.857 =================================================================================================================== 00:35:22.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 664966 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@116 -- # killprocess 662566 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 662566 ']' 00:35:22.857 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 662566 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 662566 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 662566' 00:35:23.118 killing process with pid 662566 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 662566 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 662566 00:35:23.118 00:35:23.118 real 0m16.394s 00:35:23.118 user 0m32.119s 00:35:23.118 sys 0m3.266s 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.118 ************************************ 00:35:23.118 END TEST nvmf_digest_error 00:35:23.118 ************************************ 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:23.118 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:23.118 rmmod nvme_tcp 00:35:23.378 rmmod nvme_fabrics 00:35:23.378 rmmod nvme_keyring 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 662566 ']' 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 662566 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 662566 ']' 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 662566 00:35:23.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (662566) - No such process 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 662566 is not found' 00:35:23.378 Process with pid 662566 is not found 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
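Annotation: the digest_error check above counts completions that carried the NVMe transient transport error status (SCT 00 / SC 0x22, printed as "(00/22)" in the completion dumps): get_transient_errcount reads bdev iostat over the bperf RPC socket and extracts the driver-specific counter with jq, and the (( 214 > 0 )) assertion passes. A standalone sketch of that query follows; the socket path implies bdevperf was launched with -r /var/tmp/bperf.sock, though that launch sits outside this excerpt.

./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# prints the running count of COMMAND TRANSIENT TRANSPORT ERROR completions
# (214 at this point in the run above)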
00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:23.378 01:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.291 01:00:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:25.291 00:35:25.291 real 0m42.519s 00:35:25.291 user 1m6.398s 00:35:25.291 sys 0m12.009s 00:35:25.291 01:00:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:25.292 01:00:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.292 ************************************ 00:35:25.292 END TEST nvmf_digest 00:35:25.292 ************************************ 00:35:25.292 01:00:43 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:35:25.292 01:00:43 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:35:25.292 01:00:43 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:35:25.292 01:00:43 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:25.292 01:00:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:25.292 01:00:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:25.292 01:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.553 ************************************ 00:35:25.553 START TEST nvmf_bdevperf 00:35:25.553 ************************************ 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:25.553 * Looking for test storage... 
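Annotation: before nvmf_bdevperf starts, nvmftestfini above tears the digest rig down in a fixed order: sync, modprobe -v -r nvme-tcp (which also drags out nvme_fabrics and nvme_keyring), kill the already-gone target pid, remove the target-side network namespace, and flush the initiator-side address. Sketched as standalone commands; the _remove_spdk_ns helper's body is not shown in this log, so the netns deletion line is an assumption.

sync
modprobe -v -r nvme-tcp                      # also removes nvme_fabrics, nvme_keyring
kill -0 662566 2>/dev/null && kill 662566    # pid from this run; "No such process" above
ip netns del cvl_0_0_ns_spdk                 # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                     # clear the initiator-side test address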
00:35:25.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:25.553 01:00:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:32.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:32.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:32.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:32.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:32.144 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.145 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:32.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:35:32.406 00:35:32.406 --- 10.0.0.2 ping statistics --- 00:35:32.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.406 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:32.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:35:32.406 00:35:32.406 --- 10.0.0.1 ping statistics --- 00:35:32.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.406 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=669769 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 669769 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 669769 ']' 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:32.406 01:00:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.406 [2024-06-08 01:00:50.554644] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:32.406 [2024-06-08 01:00:50.554696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.406 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.406 [2024-06-08 01:00:50.636805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:32.667 [2024-06-08 01:00:50.703194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
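Annotation: the prologue above moves one port of the E810 pair (cvl_0_0) into a private network namespace, addresses the two ends as 10.0.0.2 (target side) and 10.0.0.1 (initiator side), opens TCP/4420 in iptables, and proves reachability with ping in both directions before launching nvmf_tgt inside the namespace. On a machine without two wired ports, a veth pair gives the same shape; the veth and namespace names below are hypothetical, while the addresses and port match the log.

ip netns add nvmf_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns nvmf_tgt_ns
ip addr add 10.0.0.1/24 dev veth_init
ip link set veth_init up
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec nvmf_tgt_ns ip link set veth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target, as checked above
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1     # target -> initiator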
00:35:32.667 [2024-06-08 01:00:50.703228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.667 [2024-06-08 01:00:50.703236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.667 [2024-06-08 01:00:50.703242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.667 [2024-06-08 01:00:50.703247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.667 [2024-06-08 01:00:50.703377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.667 [2024-06-08 01:00:50.703531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.667 [2024-06-08 01:00:50.703650] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 [2024-06-08 01:00:51.370580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 Malloc0 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
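Annotation: with the target up, the harness provisions it through rpc_cmd: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create subsystem cnode1 with any-host access (-a), attach the namespace, and add the 10.0.0.2:4420 listener (the listener-ready notice follows just below). The same sequence as direct rpc.py calls; these reach the target over the /var/tmp/spdk.sock Unix socket, which is visible outside the network namespace.

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420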
00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.239 [2024-06-08 01:00:51.442286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:33.239 { 00:35:33.239 "params": { 00:35:33.239 "name": "Nvme$subsystem", 00:35:33.239 "trtype": "$TEST_TRANSPORT", 00:35:33.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.239 "adrfam": "ipv4", 00:35:33.239 "trsvcid": "$NVMF_PORT", 00:35:33.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.239 "hdgst": ${hdgst:-false}, 00:35:33.239 "ddgst": ${ddgst:-false} 00:35:33.239 }, 00:35:33.239 "method": "bdev_nvme_attach_controller" 00:35:33.239 } 00:35:33.239 EOF 00:35:33.239 )") 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:33.239 01:00:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:33.239 "params": { 00:35:33.239 "name": "Nvme1", 00:35:33.239 "trtype": "tcp", 00:35:33.239 "traddr": "10.0.0.2", 00:35:33.239 "adrfam": "ipv4", 00:35:33.239 "trsvcid": "4420", 00:35:33.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:33.239 "hdgst": false, 00:35:33.239 "ddgst": false 00:35:33.239 }, 00:35:33.239 "method": "bdev_nvme_attach_controller" 00:35:33.239 }' 00:35:33.239 [2024-06-08 01:00:51.502419] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:33.239 [2024-06-08 01:00:51.502508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670005 ] 00:35:33.500 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.500 [2024-06-08 01:00:51.563170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.500 [2024-06-08 01:00:51.627376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.761 Running I/O for 1 seconds... 
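Annotation: gen_nvmf_target_json above emits the bdev_nvme_attach_controller entry that bdevperf reads from /dev/fd/62; both digests stay disabled (hdgst/ddgst false) since this suite exercises plain verify I/O rather than digest failures. Written out as a file below, with the entry wrapped in the standard SPDK JSON-config envelope; the envelope is an assumption about what gen_nvmf_target_json assembles, as only the inner entry appears verbatim above.

cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1
# controller Nvme1 exposes bdev Nvme1n1, the job name in the results below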
00:35:33.239 [2024-06-08 01:00:51.502419] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:35:33.239 [2024-06-08 01:00:51.502508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670005 ]
00:35:33.500 EAL: No free 2048 kB hugepages reported on node 1
00:35:33.500 [2024-06-08 01:00:51.563170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:33.500 [2024-06-08 01:00:51.627376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:33.761 Running I/O for 1 seconds...
00:35:34.708
00:35:34.708                                                     Latency(us)
00:35:34.708 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:34.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:34.708 	 Verification LBA range: start 0x0 length 0x4000
00:35:34.708 	 Nvme1n1              :       1.01    9514.02      37.16       0.00       0.00   13383.47    1693.01   14199.47
00:35:34.708 ===================================================================================================================
00:35:34.708 Total                        :            9514.02      37.16       0.00       0.00   13383.47    1693.01   14199.47
00:35:34.974 01:00:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=670343
00:35:34.974 01:00:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:35:34.974 01:00:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:35:34.974 01:00:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:35:34.974 [... gen_nvmf_target_json xtrace and generated JSON identical to the first bdevperf invocation above ...]
00:35:34.975 [2024-06-08 01:00:53.086142] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:35:34.975 [2024-06-08 01:00:53.086197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670343 ]
00:35:34.975 EAL: No free 2048 kB hugepages reported on node 1
00:35:34.975 [2024-06-08 01:00:53.143897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:34.975 [2024-06-08 01:00:53.207777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:35.237 Running I/O for 15 seconds...
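As a quick consistency check on the 1-second results table above: for 4096-byte I/Os, MiB/s should equal IOPS x 4096 / 2^20, which reproduces the reported 37.16 MiB/s:

    # Sanity check (sketch): MiB/s = IOPS * io_size / 2^20 for the run above.
    awk 'BEGIN { printf "%.2f\n", 9514.02 * 4096 / 1048576 }'   # prints 37.16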
00:35:37.784 01:00:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 669769
00:35:37.784 01:00:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:37.784 [2024-06-08 01:00:56.052567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.784 [2024-06-08 01:00:56.052605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:37.784 [2024-06-08 01:00:56.052626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:37.784 [2024-06-08 01:00:56.052636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:37.784 [... 125 further print_command/print_completion pairs elided: READs for lba 82000-82840 and WRITEs for lba 82848-82992, lba advancing by 8, every one completed ABORTED - SQ DELETION (00/08) ...]
00:35:37.787 [2024-06-08 01:00:56.054893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1649e70 is same with the state(5) to be set
00:35:37.787 [2024-06-08 01:00:56.054902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:37.787 [2024-06-08 01:00:56.054908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:37.787 [2024-06-08 01:00:56.054914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0
00:35:37.787 [2024-06-08 01:00:56.054922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:37.787 [2024-06-08 01:00:56.054960] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1649e70 was disconnected and freed. reset controller.
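What happened here: the harness killed the target process (kill -9 669769 above) while the second bdevperf run still had a full queue outstanding, so the initiator's bdev_nvme layer completes every in-flight command manually as ABORTED - SQ DELETION, frees the dead qpair, and schedules a controller reset. The aborted LBAs run from 81984 to 83000 in steps of 8 blocks, i.e. one abort per outstanding command at the configured queue depth:

    # Sanity check (sketch): aborted LBA slots 81984..83000, step 8, match -q 128.
    awk 'BEGIN { print (83000 - 81984) / 8 + 1 }'   # prints 128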
00:35:37.787 [2024-06-08 01:00:56.058553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.787 [2024-06-08 01:00:56.058600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:37.787 [2024-06-08 01:00:56.059656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.788 [2024-06-08 01:00:56.059693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420
00:35:37.788 [2024-06-08 01:00:56.059704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set
00:35:37.788 [2024-06-08 01:00:56.059942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:37.788 [2024-06-08 01:00:56.060162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.788 [2024-06-08 01:00:56.060172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.788 [2024-06-08 01:00:56.060180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.788 [2024-06-08 01:00:56.063677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:38.050 [2024-06-08 01:00:56.072733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:38.050 [2024-06-08 01:00:56.073363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.050 [2024-06-08 01:00:56.073382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420
00:35:38.050 [2024-06-08 01:00:56.073390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set
00:35:38.050 [2024-06-08 01:00:56.073616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:38.050 [2024-06-08 01:00:56.073833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:38.050 [2024-06-08 01:00:56.073842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:38.050 [2024-06-08 01:00:56.073849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:38.050 [2024-06-08 01:00:56.077337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
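Each retry cycle has the same shape: bdev_nvme disconnects the dead qpair, attempts a fresh TCP connection to 10.0.0.2:4420, and connect() fails with errno 111 (ECONNREFUSED) because the killed target no longer has a listener, so controller reinitialization fails and the reset is rescheduled. A sketch of the same check from the initiator side (assumes nc is installed):

    # Sketch: confirm nothing is listening on the target port any more;
    # errno 111 in the log above is ECONNREFUSED.
    nc -z -w 1 10.0.0.2 4420 || echo 'connect refused, matching the log'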
00:35:38.050 [2024-06-08 01:00:56.086605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:38.050 [2024-06-08 01:00:56.087200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.050 [2024-06-08 01:00:56.087216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420
00:35:38.050 [2024-06-08 01:00:56.087224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set
00:35:38.050 [2024-06-08 01:00:56.087445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:38.050 [2024-06-08 01:00:56.087662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:38.050 [2024-06-08 01:00:56.087672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:38.050 [2024-06-08 01:00:56.087679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:38.050 [2024-06-08 01:00:56.091178] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:38.050 [... the same reset cycle (connect() failed, errno = 111 -> sock connection error -> controller reinitialization failed -> Resetting controller failed.) repeats at 01:00:56.100455, .114224, .128049, .141842, .155709 and .169548 ...]
00:35:38.051 [2024-06-08 01:00:56.183354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:38.051 [2024-06-08 01:00:56.184082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.051 [2024-06-08 01:00:56.184120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420
00:35:38.051 [2024-06-08 01:00:56.184131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set
00:35:38.051 [2024-06-08 01:00:56.184366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:38.051 [2024-06-08 01:00:56.184596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:38.051 [2024-06-08 01:00:56.184606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:38.051 [2024-06-08 01:00:56.184614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:38.051 [2024-06-08 01:00:56.188106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:38.051 [2024-06-08 01:00:56.197181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.197893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.197931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.197942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.198176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.198396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.198414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.198421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.201914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.051 [2024-06-08 01:00:56.210978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.211650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.211688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.211698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.211934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.212157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.212167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.212175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.215681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.051 [2024-06-08 01:00:56.224745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.225295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.225314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.225321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.225544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.225761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.225770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.225777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.229264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.051 [2024-06-08 01:00:56.238530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.239160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.239176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.239183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.239399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.239621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.239630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.239637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.243125] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.051 [2024-06-08 01:00:56.252383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.252865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.252883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.252891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.253107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.253324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.253332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.253339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.256841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.051 [2024-06-08 01:00:56.266308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.266947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.266963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.266971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.267186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.267408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.267417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.267425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.270912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.051 [2024-06-08 01:00:56.280173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.280778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.280794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.280802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.281017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.281233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.281242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.281248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.284739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.051 [2024-06-08 01:00:56.294009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.294614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.294630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.294637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.051 [2024-06-08 01:00:56.294852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.051 [2024-06-08 01:00:56.295069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.051 [2024-06-08 01:00:56.295078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.051 [2024-06-08 01:00:56.295085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.051 [2024-06-08 01:00:56.298573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.051 [2024-06-08 01:00:56.307834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.051 [2024-06-08 01:00:56.308460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.051 [2024-06-08 01:00:56.308476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.051 [2024-06-08 01:00:56.308487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.052 [2024-06-08 01:00:56.308702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.052 [2024-06-08 01:00:56.308919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.052 [2024-06-08 01:00:56.308927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.052 [2024-06-08 01:00:56.308934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.052 [2024-06-08 01:00:56.312430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.052 [2024-06-08 01:00:56.321695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.052 [2024-06-08 01:00:56.322328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.052 [2024-06-08 01:00:56.322343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.052 [2024-06-08 01:00:56.322351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.052 [2024-06-08 01:00:56.322573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.052 [2024-06-08 01:00:56.322789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.052 [2024-06-08 01:00:56.322798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.052 [2024-06-08 01:00:56.322805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.052 [2024-06-08 01:00:56.326290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.314 [2024-06-08 01:00:56.335556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.336180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.336195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.336203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.336424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.336640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.336648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.336656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.340142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.314 [2024-06-08 01:00:56.349400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.350031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.350046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.350054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.350269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.350495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.350505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.350512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.353997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.314 [2024-06-08 01:00:56.363255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.363902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.363917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.363925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.364140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.364356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.364365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.364372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.367863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.314 [2024-06-08 01:00:56.377123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.377740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.377755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.377763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.377978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.378194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.378202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.378210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.381697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.314 [2024-06-08 01:00:56.390963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.391587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.391603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.391612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.391827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.392043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.392052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.392059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.395550] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.314 [2024-06-08 01:00:56.404812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.405508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.405546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.405559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.405796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.406016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.406025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.406033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.314 [2024-06-08 01:00:56.409528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.314 [2024-06-08 01:00:56.418581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.314 [2024-06-08 01:00:56.419297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.314 [2024-06-08 01:00:56.419335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.314 [2024-06-08 01:00:56.419346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.314 [2024-06-08 01:00:56.419589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.314 [2024-06-08 01:00:56.419809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.314 [2024-06-08 01:00:56.419819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.314 [2024-06-08 01:00:56.419827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.423315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.315 [2024-06-08 01:00:56.432379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.433035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.433054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.433063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.433279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.433536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.433546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.433553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.437038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
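[Editor's note] Each failed attempt is also followed by "Failed to flush tqpair=... (9): Bad file descriptor" — errno 9 (EBADF), consistent with the qpair's socket already having been torn down by the failed connect before the flush runs. A quick decoder for the two errno values recurring in this log (illustrative only; the strings are Linux/glibc):

/* Decodes the two errno codes seen above: 111 and 9. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    int codes[] = { 111, 9 };
    for (size_t i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
        printf("errno %d = %s\n", codes[i], strerror(codes[i]));
    /* Prints: errno 111 = Connection refused
     *         errno 9   = Bad file descriptor */
    return 0;
}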
00:35:38.315 [2024-06-08 01:00:56.446281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.446967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.447005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.447021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.447256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.447485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.447496] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.447503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.450996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.315 [2024-06-08 01:00:56.460045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.460737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.460775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.460786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.461020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.461240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.461250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.461257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.464752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.315 [2024-06-08 01:00:56.473802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.474449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.474468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.474476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.474692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.474908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.474917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.474924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.478417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.315 [2024-06-08 01:00:56.487669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.488419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.488456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.488468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.488706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.488926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.488940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.488948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.492450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.315 [2024-06-08 01:00:56.501534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.502257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.502295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.502306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.502552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.502772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.502782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.502789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.506281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.315 [2024-06-08 01:00:56.515327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.516069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.516107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.516118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.516353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.516584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.516594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.516602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.520090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.315 [2024-06-08 01:00:56.529136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.529872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.529910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.529921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.530155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.530375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.530385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.530393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.533889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.315 [2024-06-08 01:00:56.542939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.543674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.315 [2024-06-08 01:00:56.543712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.315 [2024-06-08 01:00:56.543725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.315 [2024-06-08 01:00:56.543961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.315 [2024-06-08 01:00:56.544181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.315 [2024-06-08 01:00:56.544191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.315 [2024-06-08 01:00:56.544199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.315 [2024-06-08 01:00:56.547696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.315 [2024-06-08 01:00:56.556739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.315 [2024-06-08 01:00:56.557485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.316 [2024-06-08 01:00:56.557523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.316 [2024-06-08 01:00:56.557535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.316 [2024-06-08 01:00:56.557772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.316 [2024-06-08 01:00:56.557992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.316 [2024-06-08 01:00:56.558002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.316 [2024-06-08 01:00:56.558009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.316 [2024-06-08 01:00:56.561515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.316 [2024-06-08 01:00:56.570570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.316 [2024-06-08 01:00:56.571297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.316 [2024-06-08 01:00:56.571335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.316 [2024-06-08 01:00:56.571345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.316 [2024-06-08 01:00:56.571590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.316 [2024-06-08 01:00:56.571810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.316 [2024-06-08 01:00:56.571820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.316 [2024-06-08 01:00:56.571827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.316 [2024-06-08 01:00:56.575315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.316 [2024-06-08 01:00:56.584364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.316 [2024-06-08 01:00:56.585104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.316 [2024-06-08 01:00:56.585143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.316 [2024-06-08 01:00:56.585153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.316 [2024-06-08 01:00:56.585392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.316 [2024-06-08 01:00:56.585622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.316 [2024-06-08 01:00:56.585633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.316 [2024-06-08 01:00:56.585640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.316 [2024-06-08 01:00:56.589131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.579 [2024-06-08 01:00:56.598201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.598835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.598854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.598861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.599078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.599294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.599302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.599309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.579 [2024-06-08 01:00:56.602805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.579 [2024-06-08 01:00:56.612050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.612639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.612655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.612663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.612878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.613094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.613103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.613110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.579 [2024-06-08 01:00:56.616596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.579 [2024-06-08 01:00:56.625850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.626481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.626497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.626505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.626721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.626937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.626946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.626957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.579 [2024-06-08 01:00:56.630443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.579 [2024-06-08 01:00:56.639690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.640334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.640372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.640383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.640629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.640851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.640861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.640869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.579 [2024-06-08 01:00:56.644360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.579 [2024-06-08 01:00:56.653608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.654301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.654339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.654349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.654594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.654815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.654825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.654832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.579 [2024-06-08 01:00:56.658318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.579 [2024-06-08 01:00:56.667363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.579 [2024-06-08 01:00:56.668052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.579 [2024-06-08 01:00:56.668089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.579 [2024-06-08 01:00:56.668100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.579 [2024-06-08 01:00:56.668335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.579 [2024-06-08 01:00:56.668565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.579 [2024-06-08 01:00:56.668575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.579 [2024-06-08 01:00:56.668583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.672071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.681112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.681819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.681865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.681877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.682112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.682331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.682341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.682349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.685846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.580 [2024-06-08 01:00:56.694901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.695644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.695682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.695693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.695928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.696148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.696159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.696166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.699664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.708712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.709453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.709491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.709502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.709736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.709957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.709967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.709974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.713472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.580 [2024-06-08 01:00:56.722514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.723238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.723276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.723286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.723531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.723756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.723766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.723774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.727261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.736313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.737056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.737094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.737105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.737339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.737569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.737580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.737587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.741077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.580 [2024-06-08 01:00:56.750119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.750736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.750755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.750763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.750979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.751195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.751204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.751211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.754700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.763945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.764576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.764593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.764601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.764816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.765032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.765042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.765049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.768542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.580 [2024-06-08 01:00:56.777781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.778508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.778546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.778556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.778791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.779011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.779021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.779028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.782525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.791579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.792306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.792343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.792356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.792602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.792823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.792832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.792840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.796327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.580 [2024-06-08 01:00:56.805367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.806065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.580 [2024-06-08 01:00:56.806103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.580 [2024-06-08 01:00:56.806114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.580 [2024-06-08 01:00:56.806349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.580 [2024-06-08 01:00:56.806579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.580 [2024-06-08 01:00:56.806589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.580 [2024-06-08 01:00:56.806596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.580 [2024-06-08 01:00:56.810084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.580 [2024-06-08 01:00:56.819141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.580 [2024-06-08 01:00:56.819840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.581 [2024-06-08 01:00:56.819878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.581 [2024-06-08 01:00:56.819893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.581 [2024-06-08 01:00:56.820129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.581 [2024-06-08 01:00:56.820348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.581 [2024-06-08 01:00:56.820358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.581 [2024-06-08 01:00:56.820365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.581 [2024-06-08 01:00:56.823863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.581 [2024-06-08 01:00:56.832921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.581 [2024-06-08 01:00:56.833667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.581 [2024-06-08 01:00:56.833706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.581 [2024-06-08 01:00:56.833717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.581 [2024-06-08 01:00:56.833952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.581 [2024-06-08 01:00:56.834171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.581 [2024-06-08 01:00:56.834181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.581 [2024-06-08 01:00:56.834188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.581 [2024-06-08 01:00:56.837681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.581 [2024-06-08 01:00:56.846728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.581 [2024-06-08 01:00:56.847465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.581 [2024-06-08 01:00:56.847503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.581 [2024-06-08 01:00:56.847515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.581 [2024-06-08 01:00:56.847752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.581 [2024-06-08 01:00:56.847971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.581 [2024-06-08 01:00:56.847980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.581 [2024-06-08 01:00:56.847989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.581 [2024-06-08 01:00:56.851489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.581 [2024-06-08 01:00:56.860540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.861289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.861327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.861339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.861582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.861803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.861817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.861825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.865314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.844 [2024-06-08 01:00:56.874378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.875029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.875049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.875057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.875273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.875496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.875506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.875513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.879002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.844 [2024-06-08 01:00:56.888143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.888749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.888767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.888774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.888991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.889207] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.889216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.889222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.892724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.844 [2024-06-08 01:00:56.901983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.902727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.902765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.902777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.903014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.903234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.903243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.903251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.906747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.844 [2024-06-08 01:00:56.915801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.916501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.916539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.916552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.916788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.917007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.917017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.917024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.920523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.844 [2024-06-08 01:00:56.929569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.930284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.930322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.930333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.930577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.930798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.930807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.930815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.934303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.844 [2024-06-08 01:00:56.943357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.943972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.944010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.944020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.944255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.944484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.944495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.944502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.947991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.844 [2024-06-08 01:00:56.957243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.957943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.957981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.957996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.958231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.958460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.958470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.958477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.961965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.844 [2024-06-08 01:00:56.971011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.971732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.971770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.844 [2024-06-08 01:00:56.971781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.844 [2024-06-08 01:00:56.972015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.844 [2024-06-08 01:00:56.972235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.844 [2024-06-08 01:00:56.972244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.844 [2024-06-08 01:00:56.972252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.844 [2024-06-08 01:00:56.975749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.844 [2024-06-08 01:00:56.984796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.844 [2024-06-08 01:00:56.985494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.844 [2024-06-08 01:00:56.985532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:56.985543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:56.985777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:56.985997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:56.986006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:56.986014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:56.989511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.845 [2024-06-08 01:00:56.998568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:56.999306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:56.999344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:56.999355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:56.999600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:56.999820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:56.999830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:56.999841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.003332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.845 [2024-06-08 01:00:57.012419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.013059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.013078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.013086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.013302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.013525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.013535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.013542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.017026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.845 [2024-06-08 01:00:57.026269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.026997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.027034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.027045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.027280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.027507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.027517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.027525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.031014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.845 [2024-06-08 01:00:57.040058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.040764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.040802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.040813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.041047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.041266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.041275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.041283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.044785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.845 [2024-06-08 01:00:57.053831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.054465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.054491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.054499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.054720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.054938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.054948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.054955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.058446] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.845 [2024-06-08 01:00:57.067692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.068420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.068458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.068469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.068704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.068924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.068933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.068941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.072439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.845 [2024-06-08 01:00:57.081525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.082261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.082298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.082309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.082551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.082772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.082782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.082790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.086279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.845 [2024-06-08 01:00:57.095407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.096133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.096170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.096181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.096430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.096651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.096661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.096668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.100157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.845 [2024-06-08 01:00:57.109205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.109887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.109926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.109937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.110172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.110392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.845 [2024-06-08 01:00:57.110409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.845 [2024-06-08 01:00:57.110417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.845 [2024-06-08 01:00:57.113910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.845 [2024-06-08 01:00:57.122960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.845 [2024-06-08 01:00:57.123579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.845 [2024-06-08 01:00:57.123597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:38.845 [2024-06-08 01:00:57.123606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:38.845 [2024-06-08 01:00:57.123822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:38.845 [2024-06-08 01:00:57.124038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.846 [2024-06-08 01:00:57.124047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.846 [2024-06-08 01:00:57.124054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.127539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.108 [2024-06-08 01:00:57.136786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.137415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.137432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.137439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.137655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.137871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.137880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.137892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.141372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.108 [2024-06-08 01:00:57.150624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.151291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.151329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.151341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.151584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.151805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.151814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.151822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.155308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.108 [2024-06-08 01:00:57.164354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.165099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.165137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.165148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.165382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.165611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.165621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.165629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.169118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.108 [2024-06-08 01:00:57.178165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.178912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.178950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.178961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.179197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.179424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.179434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.179442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.182930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.108 [2024-06-08 01:00:57.191985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.192691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.192734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.192745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.192980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.193200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.193210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.193217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.196715] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.108 [2024-06-08 01:00:57.205762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.206432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.206470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.206480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.108 [2024-06-08 01:00:57.206715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.108 [2024-06-08 01:00:57.206935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.108 [2024-06-08 01:00:57.206945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.108 [2024-06-08 01:00:57.206953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.108 [2024-06-08 01:00:57.210449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.108 [2024-06-08 01:00:57.219494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.108 [2024-06-08 01:00:57.220231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.108 [2024-06-08 01:00:57.220268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.108 [2024-06-08 01:00:57.220279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.220522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.220743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.220752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.220760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.224249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.109 [2024-06-08 01:00:57.233294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.233987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.234025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.234036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.234270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.234503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.234514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.234521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.238010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.109 [2024-06-08 01:00:57.247055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.247733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.247771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.247782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.248017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.248237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.248246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.248254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.251756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.109 [2024-06-08 01:00:57.260802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.261511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.261549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.261561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.261798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.262017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.262027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.262035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.265532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.109 [2024-06-08 01:00:57.274582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.275322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.275360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.275372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.275617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.275838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.275848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.275856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.279349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.109 [2024-06-08 01:00:57.288394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.289123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.289161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.289172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.289415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.289636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.289645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.289653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.293152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.109 [2024-06-08 01:00:57.302199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.302897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.302935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.302946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.303181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.303411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.303422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.303429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.306921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.109 [2024-06-08 01:00:57.315971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.316743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.316781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.316791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.317026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.317246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.317256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.317263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.320761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.109 [2024-06-08 01:00:57.329806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.330511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.330549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.330565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.330803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.331023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.331033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.331040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.334538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.109 [2024-06-08 01:00:57.343581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.344217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.344235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.344244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.344464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.109 [2024-06-08 01:00:57.344681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.109 [2024-06-08 01:00:57.344689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.109 [2024-06-08 01:00:57.344696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.109 [2024-06-08 01:00:57.348181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.109 [2024-06-08 01:00:57.357431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.109 [2024-06-08 01:00:57.358094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.109 [2024-06-08 01:00:57.358132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.109 [2024-06-08 01:00:57.358143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.109 [2024-06-08 01:00:57.358378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.110 [2024-06-08 01:00:57.358607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.110 [2024-06-08 01:00:57.358618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.110 [2024-06-08 01:00:57.358626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.110 [2024-06-08 01:00:57.362113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.110 [2024-06-08 01:00:57.371159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.110 [2024-06-08 01:00:57.371857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.110 [2024-06-08 01:00:57.371895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.110 [2024-06-08 01:00:57.371906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.110 [2024-06-08 01:00:57.372141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.110 [2024-06-08 01:00:57.372360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.110 [2024-06-08 01:00:57.372377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.110 [2024-06-08 01:00:57.372385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.110 [2024-06-08 01:00:57.375882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.110 [2024-06-08 01:00:57.384927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.110 [2024-06-08 01:00:57.385684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.110 [2024-06-08 01:00:57.385722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.110 [2024-06-08 01:00:57.385733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.110 [2024-06-08 01:00:57.385968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.110 [2024-06-08 01:00:57.386187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.110 [2024-06-08 01:00:57.386197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.110 [2024-06-08 01:00:57.386204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.110 [2024-06-08 01:00:57.389700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.372 [2024-06-08 01:00:57.398756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.399502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.399540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.399552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.399789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.400009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.400018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.400026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.372 [2024-06-08 01:00:57.403525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.372 [2024-06-08 01:00:57.412573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.413197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.413235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.413246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.413490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.413711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.413720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.413728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.372 [2024-06-08 01:00:57.417218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.372 [2024-06-08 01:00:57.426671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.427285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.427303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.427310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.427532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.427750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.427758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.427766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.372 [2024-06-08 01:00:57.431250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.372 [2024-06-08 01:00:57.440499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.441188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.441226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.441239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.441482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.441702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.441712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.441720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.372 [2024-06-08 01:00:57.445208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.372 [2024-06-08 01:00:57.454255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.454853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.454891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.454902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.455137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.455356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.455366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.455375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.372 [2024-06-08 01:00:57.458879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.372 [2024-06-08 01:00:57.468132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.372 [2024-06-08 01:00:57.468848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.372 [2024-06-08 01:00:57.468886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.372 [2024-06-08 01:00:57.468899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.372 [2024-06-08 01:00:57.469139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.372 [2024-06-08 01:00:57.469360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.372 [2024-06-08 01:00:57.469369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.372 [2024-06-08 01:00:57.469377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.472873] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.481921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.482530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.482550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.482558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.482776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.482993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.483002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.483009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.486499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.373 [2024-06-08 01:00:57.495757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.496481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.496519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.496532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.496768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.496988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.496998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.497006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.500502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.509552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.510242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.510280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.510291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.510535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.510755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.510765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.510778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.514269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.373 [2024-06-08 01:00:57.523320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.524027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.524065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.524077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.524315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.524542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.524553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.524562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.528074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.537134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.537856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.537894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.537906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.538143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.538363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.538372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.538379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.541878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.373 [2024-06-08 01:00:57.550930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.551698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.551736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.551749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.551987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.552206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.552216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.552223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.555720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.564769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.565475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.565514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.565526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.565764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.565984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.565993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.566001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.569496] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.373 [2024-06-08 01:00:57.578543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.579276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.579314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.579327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.579570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.579791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.579800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.579808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.583294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.592353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.593009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.593027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.593035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.593251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.593472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.593481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.593489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.596973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.373 [2024-06-08 01:00:57.606216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.606837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.373 [2024-06-08 01:00:57.606853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.373 [2024-06-08 01:00:57.606861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.373 [2024-06-08 01:00:57.607081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.373 [2024-06-08 01:00:57.607297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.373 [2024-06-08 01:00:57.607305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.373 [2024-06-08 01:00:57.607312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.373 [2024-06-08 01:00:57.610799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.373 [2024-06-08 01:00:57.620044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.373 [2024-06-08 01:00:57.620523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.374 [2024-06-08 01:00:57.620539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.374 [2024-06-08 01:00:57.620547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.374 [2024-06-08 01:00:57.620762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.374 [2024-06-08 01:00:57.620978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.374 [2024-06-08 01:00:57.620987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.374 [2024-06-08 01:00:57.620994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.374 [2024-06-08 01:00:57.624478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.374 [2024-06-08 01:00:57.633930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.374 [2024-06-08 01:00:57.634517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.374 [2024-06-08 01:00:57.634536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.374 [2024-06-08 01:00:57.634544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.374 [2024-06-08 01:00:57.634761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.374 [2024-06-08 01:00:57.634977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.374 [2024-06-08 01:00:57.634986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.374 [2024-06-08 01:00:57.634993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.374 [2024-06-08 01:00:57.638481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.374 [2024-06-08 01:00:57.647726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.374 [2024-06-08 01:00:57.648461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.374 [2024-06-08 01:00:57.648499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.374 [2024-06-08 01:00:57.648512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.374 [2024-06-08 01:00:57.648750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.374 [2024-06-08 01:00:57.648970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.374 [2024-06-08 01:00:57.648980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.374 [2024-06-08 01:00:57.648992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.374 [2024-06-08 01:00:57.652489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.636 [2024-06-08 01:00:57.661541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.662240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.662278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.662288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.662531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.662751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.662761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.662769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.666261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.636 [2024-06-08 01:00:57.675312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.675992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.676030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.676041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.676276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.676503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.676514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.676521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.680010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.636 [2024-06-08 01:00:57.689055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.689815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.689854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.689865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.690099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.690319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.690328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.690336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.693845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.636 [2024-06-08 01:00:57.702894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.703507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.703550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.703563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.703799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.704018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.704028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.704036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.707535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.636 [2024-06-08 01:00:57.716788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.717489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.717528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.717540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.717779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.717999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.718008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.718016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.721513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.636 [2024-06-08 01:00:57.730558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.731298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.731336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.731346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.731590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.731810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.731820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.731827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.735315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.636 [2024-06-08 01:00:57.744363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.744986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.636 [2024-06-08 01:00:57.745005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.636 [2024-06-08 01:00:57.745013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.636 [2024-06-08 01:00:57.745229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.636 [2024-06-08 01:00:57.745457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.636 [2024-06-08 01:00:57.745467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.636 [2024-06-08 01:00:57.745474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.636 [2024-06-08 01:00:57.748956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.636 [2024-06-08 01:00:57.758202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.636 [2024-06-08 01:00:57.758931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.758968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.758981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.759217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.759445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.759455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.759463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.762955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.637 [2024-06-08 01:00:57.772005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.772643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.772662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.772669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.772886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.773102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.773111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.773118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.776605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.637 [2024-06-08 01:00:57.785853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.786482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.786499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.786507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.786722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.786938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.786946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.786953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.790453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.637 [2024-06-08 01:00:57.799702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.800332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.800348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.800356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.800576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.800792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.800801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.800807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.804287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.637 [2024-06-08 01:00:57.813535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.814251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.814288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.814299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.814542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.814763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.814773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.814781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.818271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.637 [2024-06-08 01:00:57.827317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.828052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.828089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.828100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.828334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.828561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.828571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.828579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.832067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.637 [2024-06-08 01:00:57.841112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.841747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.841768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.841780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.841998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.842214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.842223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.842230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.845718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.637 [2024-06-08 01:00:57.854964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.855667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.855705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.855715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.855950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.856170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.856179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.856187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.859682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.637 [2024-06-08 01:00:57.868730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.869461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.869499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.869512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.869748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.869968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.869978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.869986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.873486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.637 [2024-06-08 01:00:57.882532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.883173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.883192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.637 [2024-06-08 01:00:57.883201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.637 [2024-06-08 01:00:57.883423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.637 [2024-06-08 01:00:57.883641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.637 [2024-06-08 01:00:57.883655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.637 [2024-06-08 01:00:57.883663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.637 [2024-06-08 01:00:57.887145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.637 [2024-06-08 01:00:57.896407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.637 [2024-06-08 01:00:57.897089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.637 [2024-06-08 01:00:57.897127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.638 [2024-06-08 01:00:57.897138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.638 [2024-06-08 01:00:57.897372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.638 [2024-06-08 01:00:57.897600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.638 [2024-06-08 01:00:57.897611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.638 [2024-06-08 01:00:57.897618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.638 [2024-06-08 01:00:57.901106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.638 [2024-06-08 01:00:57.910154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.638 [2024-06-08 01:00:57.910871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.638 [2024-06-08 01:00:57.910910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.638 [2024-06-08 01:00:57.910920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.638 [2024-06-08 01:00:57.911155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.638 [2024-06-08 01:00:57.911374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.638 [2024-06-08 01:00:57.911384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.638 [2024-06-08 01:00:57.911392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.638 [2024-06-08 01:00:57.914889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.900 [2024-06-08 01:00:57.923939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.900 [2024-06-08 01:00:57.924691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.900 [2024-06-08 01:00:57.924730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:39.900 [2024-06-08 01:00:57.924740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:39.900 [2024-06-08 01:00:57.924975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:39.900 [2024-06-08 01:00:57.925195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.900 [2024-06-08 01:00:57.925205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.900 [2024-06-08 01:00:57.925213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.900 [2024-06-08 01:00:57.928708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same reset cycle (resetting controller -> connect() failed, errno = 111 -> controller reinitialization failed -> Resetting controller failed.) repeats 48 more times against tqpair=0x13efdc0, addr=10.0.0.2, port=4420 between 01:00:57.937761 and 01:00:58.591343; only the timestamps differ ...]
00:35:40.429 [2024-06-08 01:00:58.600403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.601137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.601175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.601186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.601432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.601653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.429 [2024-06-08 01:00:58.601662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.429 [2024-06-08 01:00:58.601670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.429 [2024-06-08 01:00:58.605157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.429 [2024-06-08 01:00:58.614201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.614855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.614874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.614882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.615099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.615315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.429 [2024-06-08 01:00:58.615324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.429 [2024-06-08 01:00:58.615330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.429 [2024-06-08 01:00:58.618818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.429 [2024-06-08 01:00:58.628074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.628797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.628834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.628845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.629080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.629304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.429 [2024-06-08 01:00:58.629314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.429 [2024-06-08 01:00:58.629322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.429 [2024-06-08 01:00:58.632826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.429 [2024-06-08 01:00:58.641877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.642617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.642655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.642665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.642900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.643120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.429 [2024-06-08 01:00:58.643129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.429 [2024-06-08 01:00:58.643136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.429 [2024-06-08 01:00:58.646634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.429 [2024-06-08 01:00:58.655677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.656424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.656462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.656474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.656711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.656931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.429 [2024-06-08 01:00:58.656941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.429 [2024-06-08 01:00:58.656948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.429 [2024-06-08 01:00:58.660438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.429 [2024-06-08 01:00:58.669481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.429 [2024-06-08 01:00:58.670070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.429 [2024-06-08 01:00:58.670107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.429 [2024-06-08 01:00:58.670117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.429 [2024-06-08 01:00:58.670352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.429 [2024-06-08 01:00:58.670580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.430 [2024-06-08 01:00:58.670591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.430 [2024-06-08 01:00:58.670598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.430 [2024-06-08 01:00:58.674090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.430 [2024-06-08 01:00:58.683349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.430 [2024-06-08 01:00:58.684079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.430 [2024-06-08 01:00:58.684117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.430 [2024-06-08 01:00:58.684128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.430 [2024-06-08 01:00:58.684363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.430 [2024-06-08 01:00:58.684594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.430 [2024-06-08 01:00:58.684604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.430 [2024-06-08 01:00:58.684612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.430 [2024-06-08 01:00:58.688102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.430 [2024-06-08 01:00:58.697157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.430 [2024-06-08 01:00:58.697881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.430 [2024-06-08 01:00:58.697919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.430 [2024-06-08 01:00:58.697930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.430 [2024-06-08 01:00:58.698164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.430 [2024-06-08 01:00:58.698384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.430 [2024-06-08 01:00:58.698393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.430 [2024-06-08 01:00:58.698411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.430 [2024-06-08 01:00:58.701901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.692 [2024-06-08 01:00:58.710960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.692 [2024-06-08 01:00:58.711670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.692 [2024-06-08 01:00:58.711709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.692 [2024-06-08 01:00:58.711720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.692 [2024-06-08 01:00:58.711955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.692 [2024-06-08 01:00:58.712174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.692 [2024-06-08 01:00:58.712183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.692 [2024-06-08 01:00:58.712191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.692 [2024-06-08 01:00:58.715694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.692 [2024-06-08 01:00:58.724740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.692 [2024-06-08 01:00:58.725374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.692 [2024-06-08 01:00:58.725392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.692 [2024-06-08 01:00:58.725412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.692 [2024-06-08 01:00:58.725629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.692 [2024-06-08 01:00:58.725846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.692 [2024-06-08 01:00:58.725854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.725861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.729341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.738626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.739265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.739281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.739289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.739511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.739728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.739737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.739744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.743225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.693 [2024-06-08 01:00:58.752471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.753172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.753210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.753221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.753466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.753687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.753697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.753704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.757194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.766239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.766960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.766998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.767009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.767244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.767474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.767493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.767501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.770993] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.693 [2024-06-08 01:00:58.780035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.780757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.780795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.780806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.781041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.781261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.781270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.781278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.784776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.793827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.794464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.794484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.794492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.794708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.794924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.794933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.794940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.798430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.693 [2024-06-08 01:00:58.807673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.808358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.808395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.808418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.808654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.808873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.808882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.808890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.812377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.821426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.822175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.822212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.822223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.822467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.822688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.822698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.822705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.826192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.693 [2024-06-08 01:00:58.835234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.835971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.836009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.836020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.836255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.836485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.836495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.836502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.839991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.849033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.849736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.849774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.849785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.850019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.850239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.850248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.850256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.853757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.693 [2024-06-08 01:00:58.862814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.693 [2024-06-08 01:00:58.863466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.693 [2024-06-08 01:00:58.863486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.693 [2024-06-08 01:00:58.863499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.693 [2024-06-08 01:00:58.863716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.693 [2024-06-08 01:00:58.863932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.693 [2024-06-08 01:00:58.863941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.693 [2024-06-08 01:00:58.863948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.693 [2024-06-08 01:00:58.867518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.693 [2024-06-08 01:00:58.876569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.877253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.877291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.877302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.877547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.877767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.877776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.877784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.881272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.694 [2024-06-08 01:00:58.890315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.891037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.891076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.891086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.891321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.891559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.891570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.891577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.895069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.694 [2024-06-08 01:00:58.904112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.904871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.904909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.904920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.905155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.905375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.905389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.905396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.908898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.694 [2024-06-08 01:00:58.917944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.918581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.918619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.918629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.918864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.919084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.919094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.919102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.922601] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.694 [2024-06-08 01:00:58.931859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.932524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.932562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.932574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.932810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.933030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.933040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.933048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.936544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.694 [2024-06-08 01:00:58.945590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.946327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.946365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.946378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.946625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.946846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.946856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.946863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.950350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.694 [2024-06-08 01:00:58.959392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.960093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.960131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.960142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.960377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.960608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.960618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.960626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.694 [2024-06-08 01:00:58.964114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.694 [2024-06-08 01:00:58.973164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.694 [2024-06-08 01:00:58.973891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.694 [2024-06-08 01:00:58.973928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.694 [2024-06-08 01:00:58.973939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.694 [2024-06-08 01:00:58.974173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.694 [2024-06-08 01:00:58.974392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.694 [2024-06-08 01:00:58.974413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.694 [2024-06-08 01:00:58.974421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.957 [2024-06-08 01:00:58.977912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.957 [2024-06-08 01:00:58.986962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.957 [2024-06-08 01:00:58.987692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.957 [2024-06-08 01:00:58.987730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.957 [2024-06-08 01:00:58.987740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.957 [2024-06-08 01:00:58.987975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.957 [2024-06-08 01:00:58.988195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.957 [2024-06-08 01:00:58.988205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.957 [2024-06-08 01:00:58.988212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.957 [2024-06-08 01:00:58.991720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.957 [2024-06-08 01:00:59.000768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.957 [2024-06-08 01:00:59.001523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.957 [2024-06-08 01:00:59.001561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.957 [2024-06-08 01:00:59.001572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.957 [2024-06-08 01:00:59.001811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.957 [2024-06-08 01:00:59.002031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.957 [2024-06-08 01:00:59.002041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.957 [2024-06-08 01:00:59.002048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.957 [2024-06-08 01:00:59.005549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.957 [2024-06-08 01:00:59.014591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.957 [2024-06-08 01:00:59.015332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.957 [2024-06-08 01:00:59.015370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.957 [2024-06-08 01:00:59.015380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.957 [2024-06-08 01:00:59.015624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.957 [2024-06-08 01:00:59.015845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.957 [2024-06-08 01:00:59.015855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.957 [2024-06-08 01:00:59.015863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.957 [2024-06-08 01:00:59.019355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.957 [2024-06-08 01:00:59.028398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.957 [2024-06-08 01:00:59.029138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.957 [2024-06-08 01:00:59.029176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.957 [2024-06-08 01:00:59.029187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.957 [2024-06-08 01:00:59.029432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.957 [2024-06-08 01:00:59.029653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.957 [2024-06-08 01:00:59.029662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.957 [2024-06-08 01:00:59.029670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.957 [2024-06-08 01:00:59.033159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
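errno = 111 in this cycle is ECONNREFUSED: each reconnect attempt reaches the TCP stack but finds nothing listening on 10.0.0.2:4420, which is consistent with the previous nvmf_tgt process being reported as Killed a few lines below. As a minimal sketch (the helper name and timings are assumptions, not part of the test suite), the probe that the reconnect loop is effectively making can be reproduced from bash:

  # hypothetical helper: poll a TCP listener the way the failing reconnects do;
  # a refused connect(2) here is exactly the errno = 111 logged above
  wait_for_listener() {
      local ip=$1 port=$2 deadline=$((SECONDS + 30))
      while (( SECONDS < deadline )); do
          # bash's /dev/tcp redirection performs a plain connect(2); the
          # subshell closes the probe socket again on exit
          if (exec 3<>"/dev/tcp/$ip/$port") 2>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }
  wait_for_listener 10.0.0.2 4420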
00:35:40.957 [2024-06-08 01:00:59.042203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.957 [2024-06-08 01:00:59.042861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.957 [2024-06-08 01:00:59.042898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.957 [2024-06-08 01:00:59.042909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.957 [2024-06-08 01:00:59.043143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.043363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.043373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.043385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.046884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 669769 Killed "${NVMF_APP[@]}" "$@" 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.958 [2024-06-08 01:00:59.055937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.056685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.056723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.056734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.056969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.057189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.057199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.057207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=671358 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 671358 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 671358 ']' 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:40.958 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.958 [2024-06-08 01:00:59.060702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.958 [2024-06-08 01:00:59.069753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.070341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.070379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.070391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.070636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.070856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.070865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.070874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.074368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
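On the nvmf_tgt launch traced above: -m 0xE is the reactor core mask, -e 0xFFFF enables all trace groups, -i 0 selects shared-memory instance 0, and ip netns exec places the target in the test's cvl_0_0_ns_spdk network namespace. A condensed sketch of the start-and-wait pattern, with paths and the RPC socket location read off the trace rather than taken from the harness verbatim:

  # hedged sketch of the nvmfappstart/waitforlisten sequence traced above
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # block until the target answers on its RPC socket; rpc_get_methods is a
  # cheap query that succeeds as soon as the app is listening
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.2
  done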
00:35:40.958 [2024-06-08 01:00:59.083632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.084351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.084388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.084400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.084646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.084865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.084874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.084881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.088370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.958 [2024-06-08 01:00:59.097436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.098052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.098090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.098101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.098335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.098562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.098571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.098578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.102071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.958 [2024-06-08 01:00:59.111321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.111977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.111996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.112004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.112219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.112442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.112450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.112457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.115942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.958 [2024-06-08 01:00:59.118689] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:40.958 [2024-06-08 01:00:59.118740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.958 [2024-06-08 01:00:59.125193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.125879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.125916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.125928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.126163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.126382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.126391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.126398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.129897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
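In the EAL parameter list just logged, -c 0xE pins the app to lcores 1, 2 and 3 (0xE = 0b1110), --file-prefix=spdk0 keeps this instance's hugepage files separate from other SPDK processes, --proc-type=auto lets DPDK decide primary versus secondary, and --base-virtaddr fixes the mapping base so secondary processes can attach at the same addresses. Expanding such a mask is a short exercise (illustrative only):

  # expand a core mask like 0xE into the lcore list DPDK will schedule on
  mask=0xE; cores=()
  for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && cores+=("$i")
  done
  echo "lcores: ${cores[*]}"   # prints: lcores: 1 2 3

which matches the 'Total cores available: 3' notice from spdk_app_start further down.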
00:35:40.958 [2024-06-08 01:00:59.138949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.139660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.139698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.139708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.139944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.140162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.140171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.140179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.143676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.958 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.958 [2024-06-08 01:00:59.152733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.958 [2024-06-08 01:00:59.153353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.958 [2024-06-08 01:00:59.153373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.958 [2024-06-08 01:00:59.153381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.958 [2024-06-08 01:00:59.153604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.958 [2024-06-08 01:00:59.153821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.958 [2024-06-08 01:00:59.153829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.958 [2024-06-08 01:00:59.153836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.958 [2024-06-08 01:00:59.157320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.959 [2024-06-08 01:00:59.166651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.167254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.167270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.167282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.167503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.167719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.167726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.167734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.959 [2024-06-08 01:00:59.171214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.959 [2024-06-08 01:00:59.180456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.181138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.181175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.181185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.181427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.181647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.181655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.181663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.959 [2024-06-08 01:00:59.185152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.959 [2024-06-08 01:00:59.194208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.194920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.194957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.194968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.195202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.195429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.195438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.195445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.959 [2024-06-08 01:00:59.198936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.959 [2024-06-08 01:00:59.200671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.959 [2024-06-08 01:00:59.207982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.208663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.208684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.208692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.208909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.209130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.209140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.209147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.959 [2024-06-08 01:00:59.212635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.959 [2024-06-08 01:00:59.221883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.222674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.222712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.222723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.222960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.223179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.223188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.223196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.959 [2024-06-08 01:00:59.226699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.959 [2024-06-08 01:00:59.235751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.959 [2024-06-08 01:00:59.236239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.959 [2024-06-08 01:00:59.236260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:40.959 [2024-06-08 01:00:59.236269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:40.959 [2024-06-08 01:00:59.236498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:40.959 [2024-06-08 01:00:59.236716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.959 [2024-06-08 01:00:59.236724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.959 [2024-06-08 01:00:59.236731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.222 [2024-06-08 01:00:59.240216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.222 [2024-06-08 01:00:59.249675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.222 [2024-06-08 01:00:59.250459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.222 [2024-06-08 01:00:59.250497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.222 [2024-06-08 01:00:59.250510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.222 [2024-06-08 01:00:59.250748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.222 [2024-06-08 01:00:59.250969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.222 [2024-06-08 01:00:59.250978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.222 [2024-06-08 01:00:59.250986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.222 [2024-06-08 01:00:59.254303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.222 [2024-06-08 01:00:59.254329] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.222 [2024-06-08 01:00:59.254335] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.222 [2024-06-08 01:00:59.254340] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.222 [2024-06-08 01:00:59.254344] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:41.222 [2024-06-08 01:00:59.254493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.222 [2024-06-08 01:00:59.254568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.222 [2024-06-08 01:00:59.254945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.222 [2024-06-08 01:00:59.254945] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.222 [2024-06-08 01:00:59.263548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.222 [2024-06-08 01:00:59.264205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.222 [2024-06-08 01:00:59.264224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.222 [2024-06-08 01:00:59.264232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.222 [2024-06-08 01:00:59.264454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.222 [2024-06-08 01:00:59.264671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.222 [2024-06-08 01:00:59.264679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.222 [2024-06-08 01:00:59.264686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:41.222 [2024-06-08 01:00:59.268172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.222 [2024-06-08 01:00:59.277433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.222 [2024-06-08 01:00:59.278237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.222 [2024-06-08 01:00:59.278276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.222 [2024-06-08 01:00:59.278286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.222 [2024-06-08 01:00:59.278532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.222 [2024-06-08 01:00:59.278752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.222 [2024-06-08 01:00:59.278762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.222 [2024-06-08 01:00:59.278770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.222 [2024-06-08 01:00:59.282260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.222 [2024-06-08 01:00:59.291316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.222 [2024-06-08 01:00:59.291998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.222 [2024-06-08 01:00:59.292017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.222 [2024-06-08 01:00:59.292025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.222 [2024-06-08 01:00:59.292241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.222 [2024-06-08 01:00:59.292469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.222 [2024-06-08 01:00:59.292478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.222 [2024-06-08 01:00:59.292485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.222 [2024-06-08 01:00:59.295971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.222 [2024-06-08 01:00:59.305221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.222 [2024-06-08 01:00:59.305864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.305880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.305887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.306103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.306318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.306326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.306338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.309882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.223 [2024-06-08 01:00:59.319143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.319712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.319750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.319763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.320002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.320221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.320230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.320237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.323734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.223 [2024-06-08 01:00:59.332995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.333745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.333782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.333793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.334029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.334248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.334256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.334264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.337767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.223 [2024-06-08 01:00:59.346819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.347528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.347565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.347577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.347816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.348034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.348044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.348051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.351551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.223 [2024-06-08 01:00:59.360600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.361337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.361374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.361386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.361633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.361853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.361862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.361869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.365360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.223 [2024-06-08 01:00:59.374413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.374879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.374897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.374904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.375119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.375335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.375342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.375349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.378840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.223 [2024-06-08 01:00:59.388297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.389006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.389043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.389058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.389293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.389519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.223 [2024-06-08 01:00:59.389529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.223 [2024-06-08 01:00:59.389536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.223 [2024-06-08 01:00:59.393037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.223 [2024-06-08 01:00:59.402091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.223 [2024-06-08 01:00:59.402828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.223 [2024-06-08 01:00:59.402865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.223 [2024-06-08 01:00:59.402876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.223 [2024-06-08 01:00:59.403110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.223 [2024-06-08 01:00:59.403329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.403338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.403345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.406845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.224 [2024-06-08 01:00:59.415897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.416651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.416688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.416698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.416933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.417152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.417160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.417168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.420667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.224 [2024-06-08 01:00:59.429937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.430682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.430719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.430730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.430965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.431184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.431197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.431204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.434708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.224 [2024-06-08 01:00:59.443788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.444461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.444480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.444488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.444704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.444920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.444927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.444934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.448423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.224 [2024-06-08 01:00:59.457673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.458327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.458342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.458349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.458570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.458786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.458794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.458801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.462282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.224 [2024-06-08 01:00:59.471533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.472217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.472254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.472265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.472508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.472728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.472737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.472745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.476233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.224 [2024-06-08 01:00:59.485294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.486003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.486041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.486051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.486286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.486513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.486522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.486530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.490018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.224 [2024-06-08 01:00:59.499079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.224 [2024-06-08 01:00:59.499777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.224 [2024-06-08 01:00:59.499814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.224 [2024-06-08 01:00:59.499825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.224 [2024-06-08 01:00:59.500059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.224 [2024-06-08 01:00:59.500278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.224 [2024-06-08 01:00:59.500286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.224 [2024-06-08 01:00:59.500293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.224 [2024-06-08 01:00:59.503791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.487 [2024-06-08 01:00:59.512844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.513618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.513656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.513666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.513901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.514120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.514129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.514137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.517637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.487 [2024-06-08 01:00:59.526689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.527453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.527490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.527503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.527744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.527964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.527973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.527980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.531477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.487 [2024-06-08 01:00:59.540533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.541148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.541166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.541173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.541389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.541612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.541620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.541627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.545113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.487 [2024-06-08 01:00:59.554360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.555065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.555102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.555113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.555347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.555576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.555585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.555593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.559083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.487 [2024-06-08 01:00:59.568134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.568717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.568754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.568765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.569000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.569219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.569227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.569239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.572737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.487 [2024-06-08 01:00:59.581995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.582761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.582798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.582809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.583044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.583262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.583272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.583279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.586773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.487 [2024-06-08 01:00:59.595838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.596610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.596647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.596660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.596898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.597117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.597126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.597133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.600633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.487 [2024-06-08 01:00:59.609682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.610347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.610365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.610372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.610594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.610810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.610817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.610824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.487 [2024-06-08 01:00:59.614306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.487 [2024-06-08 01:00:59.623557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.487 [2024-06-08 01:00:59.624290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.487 [2024-06-08 01:00:59.624327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.487 [2024-06-08 01:00:59.624338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.487 [2024-06-08 01:00:59.624582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.487 [2024-06-08 01:00:59.624802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.487 [2024-06-08 01:00:59.624811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.487 [2024-06-08 01:00:59.624819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.488 [2024-06-08 01:00:59.628309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.488 [2024-06-08 01:00:59.637361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.488 [2024-06-08 01:00:59.638125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.488 [2024-06-08 01:00:59.638163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.488 [2024-06-08 01:00:59.638174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.488 [2024-06-08 01:00:59.638416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.488 [2024-06-08 01:00:59.638636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.488 [2024-06-08 01:00:59.638645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.488 [2024-06-08 01:00:59.638652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.488 [2024-06-08 01:00:59.642147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.488 [2024-06-08 01:00:59.651193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.488 [2024-06-08 01:00:59.651716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.488 [2024-06-08 01:00:59.651754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.488 [2024-06-08 01:00:59.651764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.488 [2024-06-08 01:00:59.651999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.488 [2024-06-08 01:00:59.652218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.488 [2024-06-08 01:00:59.652227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.488 [2024-06-08 01:00:59.652234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.488 [2024-06-08 01:00:59.655732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.488 [2024-06-08 01:00:59.664988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.488 [2024-06-08 01:00:59.665714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.488 [2024-06-08 01:00:59.665750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.488 [2024-06-08 01:00:59.665761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.488 [2024-06-08 01:00:59.665996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.488 [2024-06-08 01:00:59.666219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.488 [2024-06-08 01:00:59.666229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.488 [2024-06-08 01:00:59.666237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.488 [2024-06-08 01:00:59.669733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.488 [2024-06-08 01:00:59.678785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.488 [2024-06-08 01:00:59.679350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.488 [2024-06-08 01:00:59.679388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420 00:35:41.488 [2024-06-08 01:00:59.679400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set 00:35:41.488 [2024-06-08 01:00:59.679647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor 00:35:41.488 [2024-06-08 01:00:59.679866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.488 [2024-06-08 01:00:59.679875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.488 [2024-06-08 01:00:59.679883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.488 [2024-06-08 01:00:59.683372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.488 [2024-06-08 01:00:59.692643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:41.488 [2024-06-08 01:00:59.693382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:41.488 [2024-06-08 01:00:59.693426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13efdc0 with addr=10.0.0.2, port=4420
00:35:41.488 [2024-06-08 01:00:59.693437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efdc0 is same with the state(5) to be set
00:35:41.488 [2024-06-08 01:00:59.693672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13efdc0 (9): Bad file descriptor
00:35:41.488 [2024-06-08 01:00:59.693891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:41.488 [2024-06-08 01:00:59.693900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:41.488 [2024-06-08 01:00:59.693908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:41.488 [2024-06-08 01:00:59.697397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-line resetting-controller / connect() failed / Resetting controller failed. cycle repeats twelve more times, identical except for timestamps, at roughly 14 ms intervals from 01:00:59.706 through 01:00:59.863 ...]
00:35:41.750 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:41.750 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0
00:35:41.750 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:41.750 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable
00:35:41.750 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... three more identical reconnect-failure cycles follow, 01:00:59.872 through 01:00:59.904 ...]
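The errno = 111 (ECONNREFUSED) storm above is the bdevperf initiator polling spdk_nvme_ctrlr_reconnect_poll_async while nothing is listening on 10.0.0.2:4420 yet; it stops the moment the target's listener comes up further down. The same wait-for-listener pattern can be reproduced with a minimal Bash sketch (a hypothetical helper, not part of the test scripts; bash's /dev/tcp keeps it dependency-free):

# wait_for_port: retry a TCP connect until a listener appears or we give up
wait_for_port() {
    local addr=$1 port=$2 tries=${3:-50} i
    for ((i = 0; i < tries; i++)); do
        # a successful redirect means something accepted the TCP connection
        if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

wait_for_port 10.0.0.2 4420 || echo "listener on 10.0.0.2:4420 never came up"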
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:41.751 [2024-06-08 01:00:59.909128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... five more nine-line reconnect-failure cycles from the still-retrying initiator, 01:00:59.913 through 01:00:59.973, were interleaved with the target-setup trace below and are elided here ...]
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:41.751 Malloc0
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:41.751 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:41.751 [2024-06-08 01:00:59.975113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:41.752 01:00:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:41.752 01:00:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 670343
00:35:41.752 [2024-06-08 01:00:59.982867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:41.752 [2024-06-08 01:01:00.024971] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
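The rpc_cmd trace above is the entire target bring-up for this test: create the TCP transport, back a namespace with a malloc bdev, and expose it through subsystem cnode1 on 10.0.0.2:4420. Outside the test harness the same sequence can be issued with SPDK's scripts/rpc.py against a running nvmf_tgt; a sketch assuming the default /var/tmp/spdk.sock RPC socket and an SPDK checkout as the working directory:

#!/usr/bin/env bash
# Bring up a minimal NVMe-oF TCP target: one malloc-backed namespace on cnode1.
RPC=./scripts/rpc.py   # nvmf_tgt must already be running

$RPC nvmf_create_transport -t tcp -o -u 8192     # flags copied from the harness's rpc_cmd call
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420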
00:35:51.749
00:35:51.749 Latency(us)
00:35:51.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:51.749 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:51.749 Verification LBA range: start 0x0 length 0x4000
00:35:51.749 Nvme1n1 : 15.00 8409.08 32.85 9794.71 0.00 7006.80 785.07 16602.45
00:35:51.749 ===================================================================================================================
00:35:51.749 Total : 8409.08 32.85 9794.71 0.00 7006.80 785.07 16602.45
00:35:51.749 01:01:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:35:51.749 01:01:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:51.749 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:35:51.749 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:35:51.750 rmmod nvme_tcp
00:35:51.750 rmmod nvme_fabrics
00:35:51.750 rmmod nvme_keyring
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 671358 ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 671358 ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 671358'
00:35:51.750 killing process with pid 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 671358
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
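Reading the summary above: over the 15.00 s verify run at queue depth 128 with 4 KiB I/O, Nvme1n1 averaged 8409.08 IOPS (32.85 MiB/s) against 9794.71 failed operations per second, zero timeouts, and a mean latency of 7006.80 us (min 785.07, max 16602.45); the high failure rate is the expected cost of the controller being reset mid-run. To pull the totals out of a saved log for trending, a small awk sketch (field offsets assume the timestamp-prefixed layout above, and the log file name is hypothetical):

# Print IOPS, MiB/s, and failures/s from the bdevperf "Total" summary line.
awk '$2 == "Total" {
    # fields: timestamp Total : IOPS MiB/s Fail/s TO/s Average min max
    printf "iops=%s mib_per_s=%s fail_per_s=%s\n", $4, $5, $6
}' bdevperf.log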
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:51.750 01:01:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:53.135 01:01:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:35:53.135
00:35:53.135 real 0m27.374s
00:35:53.135 user 1m3.335s
00:35:53.135 sys 0m6.611s
00:35:53.135 01:01:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:35:53.135 01:01:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:53.135 ************************************
00:35:53.135 END TEST nvmf_bdevperf
00:35:53.135 ************************************
00:35:53.135 01:01:11 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:35:53.135 01:01:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:35:53.135 01:01:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:35:53.135 01:01:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:53.135 ************************************
00:35:53.135 START TEST nvmf_target_disconnect
00:35:53.135 ************************************
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:35:53.135 * Looking for test storage...
00:35:53.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:53.135 01:01:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @4 prepend the same Go/protoc/golangci toolchain directories (/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin) onto PATH, @5 exports it, and @6 echoes the resulting multi-hundred-character value; the repeated PATH strings are elided here ...]
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
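The NVME_HOSTNQN/NVME_HOSTID pair generated above via nvme gen-hostnqn is what an initiator presents when logging in to a fabric target by hand. A sketch of the manual connect this environment is prepared for (assuming nvme-cli is installed and the cnode1 subsystem built later in this test is up):

# Generate a host NQN once and reuse it for every fabric login.
HOSTNQN=$(nvme gen-hostnqn)
HOSTID=${HOSTNQN##*uuid:}        # the UUID part doubles as the host ID

# Connect to the subsystem this test exposes on 10.0.0.2:4420.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"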
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:35:53.136 01:01:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:35:59.759 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:35:59.759 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:35:59.759 Found net devices under 0000:4b:00.0: cvl_0_0
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:35:59.759 Found net devices under 0000:4b:00.1: cvl_0_1
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:35:59.759 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
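The ip commands just above build the classic SPDK two-namespace test topology: the target-side E810 port cvl_0_0 (10.0.0.2) is isolated inside the cvl_0_0_ns_spdk namespace while the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, so the two ends of the NVMe/TCP connection traverse real NIC hardware. A condensed sketch of the same bring-up for reuse outside the harness (run as root; the interface names are this testbed's, substitute your own):

#!/usr/bin/env bash
# Two-namespace NVMe-oF test topology: target NIC isolated in its own netns.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"             # target port disappears from the root ns
ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT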
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:35:59.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:59.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms
00:35:59.760
00:35:59.760 --- 10.0.0.2 ping statistics ---
00:35:59.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:59.760 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:59.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:59.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.450 ms
00:35:59.760
00:35:59.760 --- 10.0.0.1 ping statistics ---
00:35:59.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:59.760 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:35:59.760 ************************************
00:35:59.760 START TEST nvmf_target_disconnect_tc1
00:35:59.760 ************************************
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:35:59.760 01:01:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:59.760 EAL: No free 2048 kB hugepages reported on node 1
00:35:59.760 [2024-06-08 01:01:18.031672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.760 [2024-06-08 01:01:18.031746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11381d0 with addr=10.0.0.2, port=4420
00:35:59.760 [2024-06-08 01:01:18.031778] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:35:59.760 [2024-06-08 01:01:18.031794] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:35:59.760 [2024-06-08 01:01:18.031801] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:35:59.760 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:35:59.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:35:59.760 Initializing NVMe Controllers
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:35:59.760
00:35:59.760 real 0m0.109s
00:35:59.760 user 0m0.040s
00:35:59.760 sys 0m0.069s
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:35:59.760 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:35:59.760 ************************************
00:35:59.760 END TEST nvmf_target_disconnect_tc1
00:35:59.760 ************************************
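tc1 passes precisely because the reconnect app fails: with no target running yet, spdk_nvme_probe() gets ECONNREFUSED and the binary exits non-zero, and the NOT wrapper inverts that into success (es=1 is checked rather than treated as an error). In spirit the helper behaves like this hedged sketch; the real implementation lives in autotest_common.sh and additionally distinguishes crashes (es > 128) from ordinary failures:

# NOT: succeed only if the wrapped command fails (a sketch, not the exact helper)
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded, so the test should fail
    fi
    return 0        # command failed as expected, so the test passes
}

NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'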
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:00.022 ************************************
00:36:00.022 START TEST nvmf_target_disconnect_tc2
00:36:00.022 ************************************
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=677401
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 677401
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 677401 ']'
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:00.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:36:00.022 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:00.022 [2024-06-08 01:01:18.178305] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:36:00.022 [2024-06-08 01:01:18.178362] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.022 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.022 [2024-06-08 01:01:18.263923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.283 [2024-06-08 01:01:18.358610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.283 [2024-06-08 01:01:18.358664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.283 [2024-06-08 01:01:18.358673] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.283 [2024-06-08 01:01:18.358681] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.283 [2024-06-08 01:01:18.358691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.283 [2024-06-08 01:01:18.358852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:36:00.283 [2024-06-08 01:01:18.358997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:36:00.283 [2024-06-08 01:01:18.359159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:36:00.283 [2024-06-08 01:01:18.359160] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:36:00.855 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:00.855 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:36:00.855 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:00.855 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:00.855 01:01:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 Malloc0 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 [2024-06-08 01:01:19.031628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.855 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.855 [2024-06-08 01:01:19.071891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=677535 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:00.856 01:01:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.116 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.035 01:01:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 677401 00:36:03.035 01:01:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 
00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Read completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 Write completed with error (sct=0, sc=8) 00:36:03.035 starting I/O failed 00:36:03.035 [2024-06-08 01:01:21.104430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.035 [2024-06-08 01:01:21.104964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.105001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.105328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.105341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 
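For tc2 the sequence above is: bring up nvmf_tgt (pid 677401) with a 64 MiB malloc namespace behind nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, start the reconnect example against it, sleep two seconds, then hard-kill the target. The outstanding I/Os from the 32-deep queue (-q 32) then drain with sct=0, sc=8 — NVMe generic status 0x08, Command Aborted due to SQ Deletion — followed by CQ transport error -6 (ENXIO, "No such device or address") on qpair 4. A condensed reconstruction of that setup, approximating the harness's rpc_cmd wrapper with scripts/rpc.py and omitting the cvl_0_0_ns_spdk network namespace (an illustrative sketch; RPC names and arguments are taken verbatim from the trace):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgtpid=$!
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$tgtpid"   # hard-kill the target mid-run, as the harness does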
00:36:03.035 [2024-06-08 01:01:21.105917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.105961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.106370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.106384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.106874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.106912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.107314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.107329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.107743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.107781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.108186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.108200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.108712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.108751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.109154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.109169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.109688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.109727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.110036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.110050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 
00:36:03.035 [2024-06-08 01:01:21.110411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.110423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.110750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.110762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.111144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.111156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.111692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.111731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.112057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.112072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.112447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.112459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.112857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.112868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.113266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.113278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.113672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.113684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.114094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.114107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 
00:36:03.035 [2024-06-08 01:01:21.114674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.114714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.115124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.115138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.035 [2024-06-08 01:01:21.115494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.035 [2024-06-08 01:01:21.115506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.035 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.115908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.115920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.116315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.116327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.116726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.116737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.117138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.117149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.117562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.117578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.117937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.117947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.118355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.118366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 
00:36:03.036 [2024-06-08 01:01:21.118802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.118814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.119190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.119200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.119701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.119739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.120078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.120092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.120494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.120506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.120879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.120890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.121296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.121306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.121718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.121728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.122051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.122062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.122479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.122490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 
00:36:03.036 [2024-06-08 01:01:21.122816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.122828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.123165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.123176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.123556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.123567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.123941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.123951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.124320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.124331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.124718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.124729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.125041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.125053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.125370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.125380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.125700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.125712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.126098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.126109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 
00:36:03.036 [2024-06-08 01:01:21.126328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.126338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.126666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.126678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.127066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.127078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.127473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.127484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.127862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.127874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.128282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.128292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.128697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.128708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.129084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.129096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.129452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.129463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.129822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.129832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 
00:36:03.036 [2024-06-08 01:01:21.130235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.036 [2024-06-08 01:01:21.130245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.036 qpair failed and we were unable to recover it. 00:36:03.036 [2024-06-08 01:01:21.130541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.130551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.130837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.130849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.131249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.131260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.131641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.131653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.132056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.132066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.132470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.132481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.132865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.132876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.133285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.133296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.133643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.133655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 
00:36:03.037 [2024-06-08 01:01:21.134023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.134034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.134397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.134422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.134690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.134702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.135050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.135060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.135470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.135481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.135798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.135809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.136205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.136216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.136507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.136519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.136925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.136935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.137341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.137351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 
00:36:03.037 [2024-06-08 01:01:21.137755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.137765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.138147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.138158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.138563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.138574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.138943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.138954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.139342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.139353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.139701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.139713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.140093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.140104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.140507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.140517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.140869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.140880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.141284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.141295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 
00:36:03.037 [2024-06-08 01:01:21.141659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.141670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.141885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.141898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.142311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.142321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.142656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.142668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.143032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.143043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.143448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.143460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.143797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.143808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.144194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.144205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.144610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.144620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 00:36:03.037 [2024-06-08 01:01:21.144979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.037 [2024-06-08 01:01:21.144989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.037 qpair failed and we were unable to recover it. 
00:36:03.037 [2024-06-08 01:01:21.145394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.145411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.145772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.145783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.146144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.146155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.146561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.146572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.146978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.146989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.147390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.147405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.147742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.147754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.148170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.148181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.148590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.148628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.148934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.148950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 
00:36:03.038 [2024-06-08 01:01:21.149351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.149362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.149711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.149724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.150125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.150137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.151229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.151252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.151466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.151480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.151848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.151860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.152222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.152233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.152614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.152624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.152993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.153003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 00:36:03.038 [2024-06-08 01:01:21.153361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.038 [2024-06-08 01:01:21.153372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.038 qpair failed and we were unable to recover it. 
00:36:03.038 [2024-06-08 01:01:21.153711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.038 [2024-06-08 01:01:21.153723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.038 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every connect() attempt from 01:01:21.154075 through 01:01:21.238595, all against tqpair=0x107b270 with addr=10.0.0.2, port=4420, errno = 111, each ending with "qpair failed and we were unable to recover it." ...]
00:36:03.044 [2024-06-08 01:01:21.238585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.044 [2024-06-08 01:01:21.238595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.044 qpair failed and we were unable to recover it.
00:36:03.044 [2024-06-08 01:01:21.238979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.238990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.239117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.239127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.239510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.239521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.239896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.239906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.240283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.240293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.240718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.240728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.240997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.241009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.241300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.241312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.241685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.241696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.242158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.242172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 
00:36:03.044 [2024-06-08 01:01:21.242555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.242566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.242922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.242933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.243318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.243328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.243718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.243729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.244131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.244142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.244543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.244553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.244920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.244932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.245208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.245219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.245504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.245515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.245875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.245886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 
00:36:03.044 [2024-06-08 01:01:21.246295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.246306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.246695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.246706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.247013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.247024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.247412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.247423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.247805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.247816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.248203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.044 [2024-06-08 01:01:21.248214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.044 qpair failed and we were unable to recover it. 00:36:03.044 [2024-06-08 01:01:21.248623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.248634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.249020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.249032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.249446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.249457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.249844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.249855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 
00:36:03.045 [2024-06-08 01:01:21.250238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.250250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.250638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.250650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.251061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.251072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.251471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.251482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.251860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.251871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.252267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.252278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.252678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.252691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.253069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.253079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.253490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.253500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.253888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.253900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 
00:36:03.045 [2024-06-08 01:01:21.254282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.254294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.254647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.254659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.255085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.255096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.255480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.255491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.255870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.255881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.256267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.256278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.256674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.256685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.257126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.257137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.257511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.257522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.257894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.257906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 
00:36:03.045 [2024-06-08 01:01:21.258311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.258323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.258715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.258726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.259107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.259118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.259525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.259537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.259936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.259947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.260329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.260339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.260714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.260725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.261111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.261122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.261508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.261519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.261888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.261900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 
00:36:03.045 [2024-06-08 01:01:21.262193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.262204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.262592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.262603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.263008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.263019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.263419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.263430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.045 [2024-06-08 01:01:21.263830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.045 [2024-06-08 01:01:21.263841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.045 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.264245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.264256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.264640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.264678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.265072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.265085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.265561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.265598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.266045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.266059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 
00:36:03.046 [2024-06-08 01:01:21.266516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.266528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.266903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.266915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.267129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.267140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.267534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.267545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.267915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.267925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.268308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.268319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.268719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.268730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.269134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.269146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.269528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.269539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.269912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.269923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 
00:36:03.046 [2024-06-08 01:01:21.270296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.270307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.270698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.270709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.271090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.271101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.271504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.271515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.271907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.271917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.272299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.272310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.272584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.272594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.272975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.272985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.273376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.273386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.273828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.273838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 
00:36:03.046 [2024-06-08 01:01:21.274217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.274228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.274492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.274506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.274890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.274901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.275305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.275316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.275716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.275726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.275979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.275990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.276362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.276373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.276748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.276759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.277172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.277182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.277577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.277588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 
00:36:03.046 [2024-06-08 01:01:21.277972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.277982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.278384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.278395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.278699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.278711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.046 [2024-06-08 01:01:21.279112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.046 [2024-06-08 01:01:21.279123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.046 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.279497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.279511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.279905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.279915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.280296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.280306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.280596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.280607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.280994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.281005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.281387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.281398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 
00:36:03.047 [2024-06-08 01:01:21.281798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.281809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.282192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.282203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.282446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.282457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.282836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.282846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.283229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.283240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.283691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.283703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.284072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.284083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.284472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.284484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.284888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.284898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.285303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.285314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 
00:36:03.047 [2024-06-08 01:01:21.285698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.285709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.286110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.286121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.286534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.286546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.286905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.286917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.287306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.287318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.287703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.287714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.288130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.288141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.288531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.288542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.288964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.288975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.289351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.289363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 
00:36:03.047 [2024-06-08 01:01:21.289753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.289763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.290074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.290088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.290471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.290481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.290883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.290894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.291298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.291309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.291527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.291539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.291939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.291949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.292364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.292375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.292584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.292595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 00:36:03.047 [2024-06-08 01:01:21.292989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.047 [2024-06-08 01:01:21.293000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.047 qpair failed and we were unable to recover it. 
00:36:03.325 [2024-06-08 01:01:21.371238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.371250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.371636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.371647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.372053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.372064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.372528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.372539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.372934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.372944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.373339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.373349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.373736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.373747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.374124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.374135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.374525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.374539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.374968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.374978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 
00:36:03.325 [2024-06-08 01:01:21.375391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.375413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.375800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.375811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.376093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.376104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.376487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.376498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.376799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.376809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.377194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.377205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.377590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.377601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.377909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.377919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.378329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.378341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.378725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.378735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 
00:36:03.325 [2024-06-08 01:01:21.379111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.379122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.379513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.379523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.379964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.379975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.380269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.380280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.380633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.325 [2024-06-08 01:01:21.380644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.325 qpair failed and we were unable to recover it. 00:36:03.325 [2024-06-08 01:01:21.381021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.381032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.381325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.381336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.381715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.381726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.381974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.381985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.382371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.382381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 
00:36:03.326 [2024-06-08 01:01:21.382786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.382798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.383177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.383187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.383571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.383582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.383966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.383977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.384251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.384271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.384658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.384671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.384944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.384955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.385205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.385216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.385466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.385477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.385858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.385868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 
00:36:03.326 [2024-06-08 01:01:21.386255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.386265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.386646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.386658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.387072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.387083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.387462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.387473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.387862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.387873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.388255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.388266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.388670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.388681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.389105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.389116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.389364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.389375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.389762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.389773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 
00:36:03.326 [2024-06-08 01:01:21.390194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.390206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.390649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.390687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.391081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.391095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.391477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.391489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.391731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.391742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.391975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.391986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.392372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.392384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.392761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.392772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.393180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.393191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.326 [2024-06-08 01:01:21.393591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.393602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 
00:36:03.326 [2024-06-08 01:01:21.393990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.326 [2024-06-08 01:01:21.394002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.326 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.394386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.394397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.394786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.394797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.395185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.395196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.395583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.395594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.396055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.396067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.396608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.396646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.397038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.397051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.397438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.397449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.397863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.397874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 
00:36:03.327 [2024-06-08 01:01:21.398248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.398260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.398645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.398656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.399049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.399060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.399466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.399477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.399826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.399838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.400223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.400233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.400623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.400635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.401002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.401012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.401385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.401396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.401784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.401795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 
00:36:03.327 [2024-06-08 01:01:21.402007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.402017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.402398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.402416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.402792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.402803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.403024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.403037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.403423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.403435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.403851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.403862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.404134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.404145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.404519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.404530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.404914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.404925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.405312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.405323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 
00:36:03.327 [2024-06-08 01:01:21.405719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.405730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.406107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.406117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.406501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.406513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.406720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.406732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.407003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.407014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.407437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.407449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.407835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.327 [2024-06-08 01:01:21.407846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.327 qpair failed and we were unable to recover it. 00:36:03.327 [2024-06-08 01:01:21.408229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.408239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.408626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.408636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.409024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.409035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 
00:36:03.328 [2024-06-08 01:01:21.409425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.409437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.409901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.409912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.410242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.410253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.410637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.410650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.410968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.410979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.411360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.411371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.411750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.411762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.411978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.411990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.412243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.412255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.412646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.412657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 
00:36:03.328 [2024-06-08 01:01:21.413075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.413085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.413468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.413478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.413883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.413893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.414265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.414277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.414662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.414674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.415056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.415067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.415236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.415246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.415600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.415611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.416011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.416021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.416408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.416419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 
00:36:03.328 [2024-06-08 01:01:21.416719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.416730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.417111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.417121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.417492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.417503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.417896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.417907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.418298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.418308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.418708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.418719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.419125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.419135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.419522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.419532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.419928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.419939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 00:36:03.328 [2024-06-08 01:01:21.420320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.328 [2024-06-08 01:01:21.420332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.328 qpair failed and we were unable to recover it. 
00:36:03.328 [2024-06-08 01:01:21.420669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.328 [2024-06-08 01:01:21.420683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.328 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent connect attempt, from [2024-06-08 01:01:21.420921] through [2024-06-08 01:01:21.502026], elapsed 00:36:03.328 to 00:36:03.334 ...]
00:36:03.334 [2024-06-08 01:01:21.502418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.502430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.502827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.502839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.503238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.503249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.503737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.503775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.504158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.504171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.504688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.504726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.505191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.334 [2024-06-08 01:01:21.505205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.334 qpair failed and we were unable to recover it. 00:36:03.334 [2024-06-08 01:01:21.505693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.505735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.506126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.506139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.506638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.506676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 
00:36:03.335 [2024-06-08 01:01:21.507061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.507074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.507484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.507496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.507884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.507894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.508352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.508363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.508573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.508583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.508977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.508987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.509371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.509382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.509778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.509789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.510170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.510182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.510597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.510608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 
00:36:03.335 [2024-06-08 01:01:21.510994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.511005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.511221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.511237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.511574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.511585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.511992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.512002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.512372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.512383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.512681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.512693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.513064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.513074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.513325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.513336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.513724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.513736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.514046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.514058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 
00:36:03.335 [2024-06-08 01:01:21.514450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.514461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.514840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.514850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.515264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.515275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.515647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.515658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.516078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.516088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.516493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.516504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.335 qpair failed and we were unable to recover it. 00:36:03.335 [2024-06-08 01:01:21.516867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.335 [2024-06-08 01:01:21.516877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.517286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.517296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.517695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.517706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.518112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.518123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 
00:36:03.336 [2024-06-08 01:01:21.518505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.518516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.518904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.518914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.519297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.519308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.519693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.519704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.520089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.520099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.520414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.520424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.520787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.520797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.521198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.521208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.521594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.521606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.522029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.522039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 
00:36:03.336 [2024-06-08 01:01:21.522422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.522433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.522821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.522832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.523215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.523226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.523611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.523623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.523981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.523992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.524394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.524408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.524788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.524798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.525162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.525173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.525550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.525561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.525975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.525986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 
00:36:03.336 [2024-06-08 01:01:21.526367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.526378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.526796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.526808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.527194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.527205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.527696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.527735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.528136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.528150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.528613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.528651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.529045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.529058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.529464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.529475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.529875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.529886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.530259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.530269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 
00:36:03.336 [2024-06-08 01:01:21.530673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.530684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.531094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.531104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.336 [2024-06-08 01:01:21.531480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.336 [2024-06-08 01:01:21.531491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.336 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.531876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.531888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.532270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.532281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.532680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.532695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.532906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.532916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.533309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.533320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.533581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.533593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.533972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.533983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 
00:36:03.337 [2024-06-08 01:01:21.534364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.534374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.534753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.534764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.535165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.535176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.535580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.535591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.536050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.536060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.536507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.536518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.536896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.536907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.537315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.537326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.537720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.537731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.538110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.538121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 
00:36:03.337 [2024-06-08 01:01:21.538416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.538428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.538813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.538823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.539141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.539151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.539537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.539548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.539958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.539969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.540371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.540381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.540764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.540774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.541156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.541167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.541669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.541707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.541925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.541938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 
00:36:03.337 [2024-06-08 01:01:21.542109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.542120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.542417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.542429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.542802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.542817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.543218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.543229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.543610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.543621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.543942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.543952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.544332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.544343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.544737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.544748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.337 [2024-06-08 01:01:21.545131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.337 [2024-06-08 01:01:21.545143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.337 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.545524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.545536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 
00:36:03.338 [2024-06-08 01:01:21.545877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.545888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.546250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.546260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.546572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.546583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.546855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.546865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.547294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.547305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.547598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.547609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.548000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.548010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.548392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.548405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.548813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.548823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.549228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.549238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 
00:36:03.338 [2024-06-08 01:01:21.549621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.549632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.550014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.550024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.550414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.550427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.550817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.550828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.551210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.551221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.551695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.551734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.552131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.552144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.552668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.552706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.553095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.553108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 00:36:03.338 [2024-06-08 01:01:21.553581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.553627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it. 
00:36:03.338 [2024-06-08 01:01:21.554038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.338 [2024-06-08 01:01:21.554050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.338 qpair failed and we were unable to recover it.
[... the same three-message failure repeats back-to-back with only the timestamps advancing (log clock 01:01:21.554 through 01:01:21.632, console clock 00:36:03.338 to 00:36:03.616): posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x107b270 (addr=10.0.0.2, port=4420); each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:03.616 [2024-06-08 01:01:21.632035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.632045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it.
00:36:03.616 [2024-06-08 01:01:21.632427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.632439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.632831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.632842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.633229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.633240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.633512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.633522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.633909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.633920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.634194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.634204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.634600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.634614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.635020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.635031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.635417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.635428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.635922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.635933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 
00:36:03.616 [2024-06-08 01:01:21.636318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.636328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.636726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.636737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.637139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.637151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.637538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.637549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.637935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.637945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.638351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.638362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.638748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.638759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.639179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.639190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.639595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.639607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.640022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.640032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 
00:36:03.616 [2024-06-08 01:01:21.640414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.640426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.640828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.640839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.641223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.641233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.641449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.641461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.641862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.641872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.642091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.642101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.642459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.642469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.642873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.642883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.643258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.643268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.643647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.643658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 
00:36:03.616 [2024-06-08 01:01:21.644040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.644051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.644459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.644470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.644872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.644883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.645154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.645164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.645546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.616 [2024-06-08 01:01:21.645557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.616 qpair failed and we were unable to recover it. 00:36:03.616 [2024-06-08 01:01:21.645934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.645944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.646201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.646211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.646597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.646608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.646991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.647001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.647288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.647299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 
00:36:03.617 [2024-06-08 01:01:21.647699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.647710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.647983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.647993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.648416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.648427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.648817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.648827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.649215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.649226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.649506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.649517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.649897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.649908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.650313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.650325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.650592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.650604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.650990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.651000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 
00:36:03.617 [2024-06-08 01:01:21.651384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.651394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.651794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.651806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.652196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.652208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.652592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.652603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.652873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.652883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.653290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.653300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.653637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.653648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.653892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.653902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.654184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.654195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.654608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.654619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 
00:36:03.617 [2024-06-08 01:01:21.655005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.655015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.655405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.655416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.655797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.655807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.656213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.656223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.656686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.656723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.657116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.617 [2024-06-08 01:01:21.657129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.617 qpair failed and we were unable to recover it. 00:36:03.617 [2024-06-08 01:01:21.657541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.657554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.657984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.657995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.658420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.658432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.658826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.658836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 
00:36:03.618 [2024-06-08 01:01:21.659220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.659231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.659612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.659622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.660004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.660014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.660398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.660412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.660735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.660751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.661115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.661125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.661293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.661303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.661580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.661592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.662042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.662052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.662456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.662467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 
00:36:03.618 [2024-06-08 01:01:21.662869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.662881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.663272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.663282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.663686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.663696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.664102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.664112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.664475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.664486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.664885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.664895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.665279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.665290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.665762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.665773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.666179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.666189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.666538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.666549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 
00:36:03.618 [2024-06-08 01:01:21.666906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.666916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.667172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.667183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.667593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.667604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.667986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.667996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.668437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.668448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.668839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.668850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.669254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.669264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.669555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.669574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.669977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.669988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.670412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 
00:36:03.618 [2024-06-08 01:01:21.670780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.670790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.671185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.671198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.671632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.671671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.672092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.618 [2024-06-08 01:01:21.672105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.618 qpair failed and we were unable to recover it. 00:36:03.618 [2024-06-08 01:01:21.672511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.672522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.672913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.672924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.673309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.673320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.673720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.673732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.674142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.674153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.674486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.674498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 
00:36:03.619 [2024-06-08 01:01:21.674906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.674917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.675333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.675343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.675640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.675651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.675898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.675908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.676298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.676308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.676692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.676703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.677105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.677115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.677497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.677508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.677919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.677930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.678336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.678346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 
00:36:03.619 [2024-06-08 01:01:21.678625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.678636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.679021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.679032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.679414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.679425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.679792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.679802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.680174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.680184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.680591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.680602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.680980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.680991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.681394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.681408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.681784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.681795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.682185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.682196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 
00:36:03.619 [2024-06-08 01:01:21.682577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.682588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.683001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.683012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.683394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.683408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.683812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.683823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.684209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.684219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.684728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.684766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.685155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.685168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.685322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.685332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.685622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.685633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 00:36:03.619 [2024-06-08 01:01:21.685998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.619 [2024-06-08 01:01:21.686009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.619 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats verbatim for every reconnect attempt, with timestamps running from 01:01:21.686391 through 01:01:21.764638 ...]
00:36:03.625 [2024-06-08 01:01:21.764638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.764650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it.
00:36:03.625 [2024-06-08 01:01:21.765032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.765044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.765252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.765262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.765649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.765660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.766016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.766026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.766416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.766427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.766824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.766835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.767219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.767229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.767741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.767780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.768205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.768219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.768701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.768739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 
00:36:03.625 [2024-06-08 01:01:21.769029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.769045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.769338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.769349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.769732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.769743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.770149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.770159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.770483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.770493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.770872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.770882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.771268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.771279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.771646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.771657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.772038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.772049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.772455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.772467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 
00:36:03.625 [2024-06-08 01:01:21.772858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.772869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.773248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.773259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.773644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.773654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.774061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.774071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.774287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.774298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.774665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.774677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.775057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.775068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.775481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.775492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.775874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.775884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 00:36:03.625 [2024-06-08 01:01:21.776299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.625 [2024-06-08 01:01:21.776310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.625 qpair failed and we were unable to recover it. 
00:36:03.626 [2024-06-08 01:01:21.776682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.776693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.777099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.777110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.777491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.777502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.777901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.777911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.778300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.778311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.778704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.778714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.779095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.779106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.779275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.779285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.779644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.779654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.779784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.779795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 
00:36:03.626 [2024-06-08 01:01:21.780166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.780177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.780562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.780573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.780970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.780981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.781388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.781399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.781798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.781809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.782195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.782207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.782589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.782602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.783003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.783013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.783390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.783400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.783811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.783821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 
00:36:03.626 [2024-06-08 01:01:21.784213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.784224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.784657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.784695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.785086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.785099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.785598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.785637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.786031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.786044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.786462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.786473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.786861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.786871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.787088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.787102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.787483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.787494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.787897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.787907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 
00:36:03.626 [2024-06-08 01:01:21.788294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.788305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.788680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.788691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.789095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.789105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.789522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.789533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.789911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.789923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.790133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.790144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.790545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.790557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.790964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.626 [2024-06-08 01:01:21.790975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.626 qpair failed and we were unable to recover it. 00:36:03.626 [2024-06-08 01:01:21.791379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.791389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.791817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.791829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 
00:36:03.627 [2024-06-08 01:01:21.792286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.792296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.792745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.792756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.793126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.793136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.793522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.793535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.793813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.793824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.794045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.794059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.794409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.794422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.794798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.794810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.795143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.795155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.795528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.795539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 
00:36:03.627 [2024-06-08 01:01:21.795923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.795933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.796324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.796335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.796715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.796727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.797137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.797147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.797578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.797589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.797980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.797990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.798298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.798311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.798641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.798652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.799026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.799036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.799421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.799432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 
00:36:03.627 [2024-06-08 01:01:21.799844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.799854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.800262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.800272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.800662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.800674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.801060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.801072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.801456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.801466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.801840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.801851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.802234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.802244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.802631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.802642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.803022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.803032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.803447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.803457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 
00:36:03.627 [2024-06-08 01:01:21.803849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.803860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.627 qpair failed and we were unable to recover it. 00:36:03.627 [2024-06-08 01:01:21.804244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.627 [2024-06-08 01:01:21.804255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.804640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.804651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.805054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.805065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.805437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.805449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.805887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.805897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.806278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.806288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.806759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.806770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.807146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.807157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.807552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.807563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 
00:36:03.628 [2024-06-08 01:01:21.808016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.808027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.808434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.808445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.808860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.808870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.809254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.809265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.809562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.809574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.809944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.809955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.810330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.810340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.810632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.810644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.811014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.811024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.811334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.811345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 
00:36:03.628 [2024-06-08 01:01:21.811755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.811766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.812151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.812161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.812413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.812425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.812821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.812832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.813220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.813230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.813553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.813564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.813957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.813968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.814380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.814390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.814767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.814778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.815234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.815244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 
00:36:03.628 [2024-06-08 01:01:21.815736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.815774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.816179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.816193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.816698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.816736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.817127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.817140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.817637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.817675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.818085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.818097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.818486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.818498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.818920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.818931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.819147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.819157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 00:36:03.628 [2024-06-08 01:01:21.819551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.628 [2024-06-08 01:01:21.819562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.628 qpair failed and we were unable to recover it. 
00:36:03.905 [2024-06-08 01:01:21.894095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.894107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.894502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.894514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.894715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.894726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.895104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.895115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.895498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.895509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.895768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.895778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.896165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.896176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.905 qpair failed and we were unable to recover it. 00:36:03.905 [2024-06-08 01:01:21.896564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.905 [2024-06-08 01:01:21.896575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.896969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.896980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.897387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.897397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 
00:36:03.906 [2024-06-08 01:01:21.897613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.897627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.897977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.897988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.898371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.898382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.898595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.898607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.899008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.899019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.899271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.899282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.899677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.899688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.900109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.900119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.900493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.900504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.900713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.900724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 
00:36:03.906 [2024-06-08 01:01:21.900986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.900996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.901368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.901378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.901777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.901788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.902176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.902187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.902395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.902415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.902677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.902689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.902975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.902987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.903364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.903375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.903709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.903723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.904107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.904118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 
00:36:03.906 [2024-06-08 01:01:21.904524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.904535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.904938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.904949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.905355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.905365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.905747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.905758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.906138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.906149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.906546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.906558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.906955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.906967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.907372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.907382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.907840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.907851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.908240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.908251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 
00:36:03.906 [2024-06-08 01:01:21.908684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.908722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.909124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.909137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.909640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.909678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.910074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.910088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.910343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.910356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.906 qpair failed and we were unable to recover it. 00:36:03.906 [2024-06-08 01:01:21.910760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.906 [2024-06-08 01:01:21.910773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.911157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.911169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.911542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.911554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.911920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.911931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.912339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.912349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 
00:36:03.907 [2024-06-08 01:01:21.912731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.912743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.913117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.913128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.913518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.913529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.913941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.913951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.914338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.914349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.914732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.914748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.915133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.915144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.915547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.915558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.915958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.915968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.916285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.916297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 
00:36:03.907 [2024-06-08 01:01:21.916694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.916705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.917102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.917112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.917426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.917437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.917831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.917841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.918243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.918253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.918662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.918673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.918897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.918909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.919290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.919300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.919679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.919690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.920096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.920107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 
00:36:03.907 [2024-06-08 01:01:21.920525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.920536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.920928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.920940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.921323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.921334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.921728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.921739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.922124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.922136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.922521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.922532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.922961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.922972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.923228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.923239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.923622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.923632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.924018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.924028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 
00:36:03.907 [2024-06-08 01:01:21.924413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.924424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.924870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.924881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.925092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.925104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.925505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.907 [2024-06-08 01:01:21.925516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.907 qpair failed and we were unable to recover it. 00:36:03.907 [2024-06-08 01:01:21.925914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.925924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.926323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.926334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.926718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.926729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.927101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.927112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.927495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.927506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.927911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.927921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 
00:36:03.908 [2024-06-08 01:01:21.928288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.928298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.928714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.928725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.929107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.929117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.929482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.929493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.929875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.929886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.930241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.930252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.930577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.930589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.930975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.930986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.931369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.931379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.931763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.931774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 
00:36:03.908 [2024-06-08 01:01:21.932154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.932164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.932572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.932583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.932964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.932974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.933371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.933381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.933802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.933812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.934216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.934227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.934701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.934740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.935135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.935149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.935636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.935674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.936080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.936093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 
00:36:03.908 [2024-06-08 01:01:21.936514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.936526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.936908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.936919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.937303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.937314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.937508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.937518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.937873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.937883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.938266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.938277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.938679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.938691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.939094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.939105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.939521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.939533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.939916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.939927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 
00:36:03.908 [2024-06-08 01:01:21.940309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.940320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.940692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.940703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.908 [2024-06-08 01:01:21.941087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.908 [2024-06-08 01:01:21.941097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.908 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.941482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.941493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.941887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.941897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.942306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.942316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.942713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.942725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.942882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.942895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.943305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.943317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.943615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.943626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 
00:36:03.909 [2024-06-08 01:01:21.944042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.944053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.944453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.944464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.944860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.944870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.945286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.945663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.945674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.945985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.945997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.946410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.946422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.946793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.946803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.947184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.947194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 00:36:03.909 [2024-06-08 01:01:21.947573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.909 [2024-06-08 01:01:21.947584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.909 qpair failed and we were unable to recover it. 
00:36:03.909 [2024-06-08 01:01:21.947966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.909 [2024-06-08 01:01:21.947977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.909 qpair failed and we were unable to recover it.
00:36:03.909 [2024-06-08 01:01:21.948382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.909 [2024-06-08 01:01:21.948394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.909 qpair failed and we were unable to recover it.
00:36:03.909 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt, timestamps advancing from 01:01:21.948 through 01:01:22.028 ...]
00:36:03.915 [2024-06-08 01:01:22.028766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.915 [2024-06-08 01:01:22.028780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.915 qpair failed and we were unable to recover it.
00:36:03.915 [2024-06-08 01:01:22.029176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.029187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.029395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.029419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.029815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.029826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.030266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.030277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.030643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.030682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.031076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.031089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.031477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.031489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.031867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.031877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.032258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.032269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.032639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.032651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 
00:36:03.915 [2024-06-08 01:01:22.032940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.032951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.033350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.033360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.033734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.033745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.034171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.034182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.034567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.034578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.034985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.034996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.035384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.035394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.035783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.035798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.036117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.036128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.036633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.036671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 
00:36:03.915 [2024-06-08 01:01:22.036959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.036973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.037360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.037371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.037777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.037788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.038184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.038194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.038580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.038591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.038966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.038977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.039351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.039362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.039772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.039783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.040249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.040260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.040734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.040772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 
00:36:03.915 [2024-06-08 01:01:22.041164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.915 [2024-06-08 01:01:22.041178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.915 qpair failed and we were unable to recover it. 00:36:03.915 [2024-06-08 01:01:22.041655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.041693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.042084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.042097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.042481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.042493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.042877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.042889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.043168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.043178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.043567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.043578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.043952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.043962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.044275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.044285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.044672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.044683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 
00:36:03.916 [2024-06-08 01:01:22.045064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.045074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.045468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.045478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.045854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.045865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.046084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.046097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.046491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.046507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.046791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.046801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.047202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.047213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.047618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.047629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.048012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.048022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.048448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.048459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 
00:36:03.916 [2024-06-08 01:01:22.048676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.048686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.049077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.049088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.049477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.049488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.049870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.049881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.050264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.050274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.050579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.050590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.050980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.050990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.051380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.051391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.051644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.051655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.052080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.052090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 
00:36:03.916 [2024-06-08 01:01:22.052464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.052475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.052861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.052871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.053118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.053129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.053425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.053435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.053823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.053834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.054218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.054228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.054623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.054635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.055042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.055052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.055432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.055444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 00:36:03.916 [2024-06-08 01:01:22.055798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.916 [2024-06-08 01:01:22.055808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.916 qpair failed and we were unable to recover it. 
00:36:03.917 [2024-06-08 01:01:22.056194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.056204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.056482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.056493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.056878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.056888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.057269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.057279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.057683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.057694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.058092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.058102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.058485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.058496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.058903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.058914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.059295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.059306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.059689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.059700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 
00:36:03.917 [2024-06-08 01:01:22.060084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.060095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.060484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.060495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.060844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.060854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.061256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.061266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.061645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.061656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.062038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.062049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.062431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.062442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.062818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.062829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.063211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.063221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.063609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.063619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 
00:36:03.917 [2024-06-08 01:01:22.064081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.064092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.064463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.064474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.064722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.064732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.065114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.065124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.065510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.065520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.065928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.065938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.066323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.066333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.066710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.066722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.067013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.067024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.067405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.067417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 
00:36:03.917 [2024-06-08 01:01:22.067712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.067723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.068112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.068122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.068523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.068534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.068945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.068955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.069268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.069279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.069661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.069671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.070055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.070065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.070435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.070446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.917 [2024-06-08 01:01:22.070829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.917 [2024-06-08 01:01:22.070840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.917 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.071224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.071235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 
00:36:03.918 [2024-06-08 01:01:22.071601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.071612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.072001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.072011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.072374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.072387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.072771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.072782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.073163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.073174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.073738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.073777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.074171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.074184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.074578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.074589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.074897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.074909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.075119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.075130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 
00:36:03.918 [2024-06-08 01:01:22.075475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.075485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.075870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.075880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.076265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.076275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.076661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.076672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.077059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.077069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.077440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.077450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.077731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.077743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.078159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.078169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.078545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.078557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 00:36:03.918 [2024-06-08 01:01:22.078946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.918 [2024-06-08 01:01:22.078957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.918 qpair failed and we were unable to recover it. 
00:36:03.918 [2024-06-08 01:01:22.079342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.918 [2024-06-08 01:01:22.079352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:03.918 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 01:01:22.079342 through 01:01:22.157320 (elapsed time 00:36:03.918-00:36:03.924), with timestamps differing only in microseconds; every connect attempt for tqpair=0x107b270 at addr=10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:36:03.924 [2024-06-08 01:01:22.157637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.157649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.157994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.158005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.158117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.158127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.158507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.158517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.158715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.158728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.159085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.159096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.159472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.159483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.159977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.159987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.160390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.160400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.160777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.160788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 
00:36:03.924 [2024-06-08 01:01:22.161162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.161172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.161570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.161581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.161791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.161801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.162194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.162205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.162590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.162601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.162870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.162880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.163262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.163276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.163689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.163700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.164103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.164114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.164478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.164490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 
00:36:03.924 [2024-06-08 01:01:22.164914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.164925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.165364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.165375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.165760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.165771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.166186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.166196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.166690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.166728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.167119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.167133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.167517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.167530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.167893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.167904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.168158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.168170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 00:36:03.924 [2024-06-08 01:01:22.168510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.924 [2024-06-08 01:01:22.168521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.924 qpair failed and we were unable to recover it. 
00:36:03.925 [2024-06-08 01:01:22.168922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.168933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.169340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.169351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.169746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.169757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.170140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.170152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.170467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.170477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.170834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.170844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.171225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.171236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.171454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.171465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.171880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.171890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.172294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.172304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 
00:36:03.925 [2024-06-08 01:01:22.172693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.172707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.173096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.173108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.173523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.173533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.173931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.173944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.174328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.174339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.174719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.174731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.175115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.175125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.175541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.175552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.175980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.175991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.176370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.176380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 
00:36:03.925 [2024-06-08 01:01:22.176767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.176778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:03.925 [2024-06-08 01:01:22.177187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.925 [2024-06-08 01:01:22.177199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:03.925 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.177629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.177642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.178030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.178041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.178434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.178445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.178858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.178869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.179235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.179246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.179462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.179475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.179860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.179871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.180272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.180283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 
00:36:04.198 [2024-06-08 01:01:22.180690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.180701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.181090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.181101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.181485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.181496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.181912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.181922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.182304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.182315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.182714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.182726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.183038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.183049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.183455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.183467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.183856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.183867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.184252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.184263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 
00:36:04.198 [2024-06-08 01:01:22.184648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.184659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.184911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.184923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.185301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.185312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.185638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.185650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.186033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.186043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.186451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.186462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.186845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.198 [2024-06-08 01:01:22.186855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.198 qpair failed and we were unable to recover it. 00:36:04.198 [2024-06-08 01:01:22.187237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.187247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.187635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.187645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.188046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.188057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 
00:36:04.199 [2024-06-08 01:01:22.188437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.188448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.188830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.188842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.189232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.189243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.189603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.189614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.189992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.190003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.190377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.190388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.190749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.190760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.191153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.191163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.191552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.191563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.191907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.191918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 
00:36:04.199 [2024-06-08 01:01:22.192315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.192326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.192736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.192747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.193180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.193191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.193588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.193598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.193974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.193985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.194356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.194366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.194791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.194802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.195185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.195196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.195674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.195712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.196117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.196129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 
00:36:04.199 [2024-06-08 01:01:22.196517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.196528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.196921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.196932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.197109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.197119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.197467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.197478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.197849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.197860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.198231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.198241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.198621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.198632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.199036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.199048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.199434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.199447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.199830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.199840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 
00:36:04.199 [2024-06-08 01:01:22.200228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.200238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.200627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.200643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.200994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.201004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.201458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.201469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.201880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.201891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.199 [2024-06-08 01:01:22.202298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.199 [2024-06-08 01:01:22.202308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.199 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.202709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.202721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.203109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.203120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.203505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.203518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.203930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.203941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 
00:36:04.200 [2024-06-08 01:01:22.204221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.204232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.204628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.204640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.205082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.205092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.205469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.205479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.205868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.205881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.206279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.206289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.206693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.206705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.207117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.207128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.207515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.207527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 00:36:04.200 [2024-06-08 01:01:22.207913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.200 [2024-06-08 01:01:22.207925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.200 qpair failed and we were unable to recover it. 
00:36:04.200 [2024-06-08 01:01:22.208330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.200 [2024-06-08 01:01:22.208340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.200 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 01:01:22.208 through 01:01:22.289 ...]
00:36:04.205 [2024-06-08 01:01:22.289579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.205 [2024-06-08 01:01:22.289590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.205 qpair failed and we were unable to recover it.
00:36:04.205 [2024-06-08 01:01:22.289974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.205 [2024-06-08 01:01:22.289984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.205 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.290370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.290381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.290767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.290778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.291190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.291201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.291612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.291624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.292006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.292017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.292399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.292412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.292816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.292828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.293204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.293214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.293713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.293751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 
00:36:04.206 [2024-06-08 01:01:22.294141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.294154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.294613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.294651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.295019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.295032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.295431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.295442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.295709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.295719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.296125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.296140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.296430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.296441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.296840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.296850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.297234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.297244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.297495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.297507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 
00:36:04.206 [2024-06-08 01:01:22.297838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.297849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.298255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.298266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.298642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.298653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.299057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.299069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.299383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.299394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.299778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.299788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.300167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.300178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.300679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.300717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.301111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.301123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.301589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.301601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 
00:36:04.206 [2024-06-08 01:01:22.301981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.301991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.302307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.302317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.302583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.302594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.303019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.303029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.303432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.303443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.303853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.303864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.304246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.304258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.304647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.304658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.305071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.305082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 00:36:04.206 [2024-06-08 01:01:22.305522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.206 [2024-06-08 01:01:22.305533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.206 qpair failed and we were unable to recover it. 
00:36:04.207 [2024-06-08 01:01:22.305729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.305739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.306125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.306136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.306436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.306448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.306858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.306868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.307086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.307096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.307472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.307482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.307874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.307885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.308289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.308299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.308693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.308704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.308925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.308938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 
00:36:04.207 [2024-06-08 01:01:22.309352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.309363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.309755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.309767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.310155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.310165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.310549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.310561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.310959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.310970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.311374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.311385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.311845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.311856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.312300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.312311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.312714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.312724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.313141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.313152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 
00:36:04.207 [2024-06-08 01:01:22.313403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.313415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.313825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.313836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.314230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.314240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.314751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.314789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.315211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.315224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.315708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.315746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.316144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.316157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.316653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.316691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.317106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.317119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.317595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.317633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 
00:36:04.207 [2024-06-08 01:01:22.318055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.318068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.318476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.318488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.207 [2024-06-08 01:01:22.318782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.207 [2024-06-08 01:01:22.318794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.207 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.319186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.319197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.319578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.319590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.320009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.320020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.320404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.320416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.320686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.320697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.320979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.320989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.321368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.321378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 
00:36:04.208 [2024-06-08 01:01:22.321769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.321779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.322162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.322173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.322561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.322572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.322984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.322998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.323386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.323396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.323652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.323662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.324043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.324054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.324440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.324451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.324834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.324845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.325223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.325233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 
00:36:04.208 [2024-06-08 01:01:22.325617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.325628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.326033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.326043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.326463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.326474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.326856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.326867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.327250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.327260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.327661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.327672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.328060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.328072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.328321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.328332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.328728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.328739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.329151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.329162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 
00:36:04.208 [2024-06-08 01:01:22.329544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.329555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.329938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.329948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.330330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.330341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.330600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.330611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.330994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.331004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.331387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.331397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.331851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.331862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.332251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.332261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.332735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.332774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.333082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.333095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 
00:36:04.208 [2024-06-08 01:01:22.333379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.333395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.208 [2024-06-08 01:01:22.333658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.208 [2024-06-08 01:01:22.333669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.208 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.334054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.334065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.334453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.334466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.334867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.334877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.335109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.335119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.335492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.335503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.335902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.335913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.336328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.336338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.336719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.336731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 
00:36:04.209 [2024-06-08 01:01:22.337118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.337130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.337519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.337531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.337964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.337974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.338354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.338364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.338772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.338783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.339159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.339170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.339546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.339557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.339918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.339929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.340314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.340326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 00:36:04.209 [2024-06-08 01:01:22.340712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.209 [2024-06-08 01:01:22.340722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.209 qpair failed and we were unable to recover it. 
00:36:04.209 [2024-06-08 01:01:22.341106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.209 [2024-06-08 01:01:22.341118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.209 qpair failed and we were unable to recover it.
00:36:04.215 (the three messages above repeat 210 times in this span, from [2024-06-08 01:01:22.341106] through [2024-06-08 01:01:22.420506]: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, and tqpair=0x107b270 cannot be recovered)
00:36:04.215 [2024-06-08 01:01:22.420906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.420917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.421169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.421181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.421590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.421601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.421965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.421976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.422360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.422370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.422743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.422754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.423168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.423179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.423388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.423399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.423680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.423691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.424106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.424116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 
00:36:04.215 [2024-06-08 01:01:22.424503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.424514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.424881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.424892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.425264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.425274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.425681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.425692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.426053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.426069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.426520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.426533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.426927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.426939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.427319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.427332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.427722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.427734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.428139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.428150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 
00:36:04.215 [2024-06-08 01:01:22.428534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.428545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.428959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.428970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.429352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.429362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.429772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.429784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.430171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.430181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.430567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.430578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.430886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.430897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.431308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.431318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.431723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.431734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.432118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.432128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 
00:36:04.215 [2024-06-08 01:01:22.432521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.432532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.432897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.432908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.433295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.433307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.433621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.215 [2024-06-08 01:01:22.433632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.215 qpair failed and we were unable to recover it. 00:36:04.215 [2024-06-08 01:01:22.434021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.434032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.434435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.434447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.434854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.434865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.435297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.435307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.435713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.435724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.436132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.436143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 
00:36:04.216 [2024-06-08 01:01:22.436600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.436611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.436989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.436999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.437382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.437393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.437791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.437802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.438184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.438195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.438592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.438604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.439042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.439053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.439460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.439471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.439681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.439691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.440083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.440093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 
00:36:04.216 [2024-06-08 01:01:22.440369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.440381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.440790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.440801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.441184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.441194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.441582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.441593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.441978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.441988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.442400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.442417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.442770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.442781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.443168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.443179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.443684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.443722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.444128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.444142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 
00:36:04.216 [2024-06-08 01:01:22.444625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.444663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.445048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.445062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.445480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.445492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.445743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.445753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.446133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.446143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.446527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.446538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.446907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.216 [2024-06-08 01:01:22.446917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.216 qpair failed and we were unable to recover it. 00:36:04.216 [2024-06-08 01:01:22.447286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.447297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.447697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.447709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.448101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.448112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 
00:36:04.217 [2024-06-08 01:01:22.448396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.448421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.448812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.448822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.449095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.449105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.449485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.449495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.449901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.449912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.450315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.450326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.450729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.450740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.451128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.451138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.451533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.451544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.451909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.451920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 
00:36:04.217 [2024-06-08 01:01:22.452306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.452317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.452751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.452762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.453164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.453179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.453255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.453265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.453656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.453668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.453861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.453872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.454215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.454225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.454611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.454621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.454887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.454899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.455291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.455302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 
00:36:04.217 [2024-06-08 01:01:22.455706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.455717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.456102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.456114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.456488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.456499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.456875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.456886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.457272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.457282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.457677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.457688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.458099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.458110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.458495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.458506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.458894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.458904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.459289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.459300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 
00:36:04.217 [2024-06-08 01:01:22.459690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.459701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.459998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.460010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.460382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.460393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.460716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.460727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.461111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.461123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.461506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.217 [2024-06-08 01:01:22.461517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.217 qpair failed and we were unable to recover it. 00:36:04.217 [2024-06-08 01:01:22.461903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.461913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.462297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.462307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.462600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.462611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.462997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.463009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 
00:36:04.218 [2024-06-08 01:01:22.463392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.463409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.463778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.463790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.464206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.464217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.464612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.464623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.465011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.465022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.465271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.465281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.465685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.465695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.466081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.466092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.466557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.466568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.466841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.466852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 
00:36:04.218 [2024-06-08 01:01:22.467228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.467238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.467639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.467650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.468042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.468052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.468441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.468453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.468836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.468847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.218 [2024-06-08 01:01:22.469262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.218 [2024-06-08 01:01:22.469272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.218 qpair failed and we were unable to recover it. 00:36:04.490 [2024-06-08 01:01:22.469699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-06-08 01:01:22.469712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-06-08 01:01:22.470004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-06-08 01:01:22.470016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-06-08 01:01:22.470509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-06-08 01:01:22.470519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-06-08 01:01:22.470943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-06-08 01:01:22.470954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 
00:36:04.490 [2024-06-08 01:01:22.471330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.490 [2024-06-08 01:01:22.471340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.490 qpair failed and we were unable to recover it.
00:36:04.490 [the same three-message failure sequence (posix.c:1037 connect() errno = 111, nvme_tcp.c:2374 sock connection error for tqpair=0x107b270 at 10.0.0.2 port 4420, "qpair failed and we were unable to recover it.") repeats for roughly 200 further reconnect attempts, with only the timestamps advancing from 01:01:22.471 to 01:01:22.552]
00:36:04.496 [2024-06-08 01:01:22.552418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.496 [2024-06-08 01:01:22.552436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.496 qpair failed and we were unable to recover it.
00:36:04.496 [2024-06-08 01:01:22.552817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.552828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.553212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.553222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.553611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.553622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.553905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.553916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.554242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.554253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.554574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.554586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.554993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.555003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.555391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.555405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.555800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.555810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.556198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.556208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 
00:36:04.496 [2024-06-08 01:01:22.556535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.556546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.556914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.556925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.557321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.557332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.557715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.557726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.496 qpair failed and we were unable to recover it. 00:36:04.496 [2024-06-08 01:01:22.557948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.496 [2024-06-08 01:01:22.557960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.558295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.558305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.558706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.558717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.559102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.559113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.559500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.559511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.559884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.559895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-06-08 01:01:22.560277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.560287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.560690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.560701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.561171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.561181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.561564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.561575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.561930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.561941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.562330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.562341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.562728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.562739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.562949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.562960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.563354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.563366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.563785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.563797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-06-08 01:01:22.564183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.564194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.564576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.564587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.564992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.565003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.565386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.565398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.565637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.565649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.566030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.566041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.566454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.566465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.566673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.566684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.567036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.567047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.567434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.567445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-06-08 01:01:22.567858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.567870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.568125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.568136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.568480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.568491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.568900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.568911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.569319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.569330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.569717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.569728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.570115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.570127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.570509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.570520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.570881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.570893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.571277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.571287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-06-08 01:01:22.571573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.571583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.572015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.572026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.572432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-06-08 01:01:22.572444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-06-08 01:01:22.572833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.572844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.573230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.573243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.573629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.573640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.574057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.574068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.574454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.574465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.574852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.574863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.575245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.575256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-06-08 01:01:22.575657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.575668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.576065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.576076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.576461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.576473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.576873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.576885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.577298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.577310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.577713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.577724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.578108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.578118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.578368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.578378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.578788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.578800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.579184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.579195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-06-08 01:01:22.579587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.579598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.580002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.580013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.580426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.580437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.580729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.580740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.581137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.581148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.581538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.581549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.581972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.581984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.582235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.582245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.582636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.582648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.583030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.583041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-06-08 01:01:22.583449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.583460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.583737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.583751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.584136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.584147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.584596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.584607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.584939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.584950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.585347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.585358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.585747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.585758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.586142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.586152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.586524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.586535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.586917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.586927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-06-08 01:01:22.587174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.587184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.587586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-06-08 01:01:22.587596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-06-08 01:01:22.587974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.587986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.588391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.588408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.588804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.588815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.589216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.589227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.589723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.589761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.589969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.589982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.590375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.590386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.590762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.590773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-06-08 01:01:22.591189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.591199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.591588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.591600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.591869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.591879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.592106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.592119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.592496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.592508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.592949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.592961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.593422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.593434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.593641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.593651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.594041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.594051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.594453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.594464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-06-08 01:01:22.594884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.594895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.595296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.595307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.595699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.595709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.596089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.596099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.596488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.596499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.596915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.596925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.597326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.597337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.597655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.597667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.598045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.598056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.598374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.598385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-06-08 01:01:22.598770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.598781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.599173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.599184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.599581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.599592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.599992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.600003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.600140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.600150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.600406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.600418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.600841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.600851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.601240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-06-08 01:01:22.601251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-06-08 01:01:22.601744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.601782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.602171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.602184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 
00:36:04.500 [2024-06-08 01:01:22.603156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.603180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.603544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.603557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.604334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.604356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.604761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.604774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.605192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.605204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.605635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.605648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.606032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.606043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.606428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.606441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.606848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.606860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-06-08 01:01:22.607283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-06-08 01:01:22.607294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 
00:36:04.505 [2024-06-08 01:01:22.682981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.682993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.683427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.683438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.683690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.683701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.684083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.684094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.684490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.684500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.684904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.684915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.685318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.685329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.505 [2024-06-08 01:01:22.685714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.505 [2024-06-08 01:01:22.685725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.505 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.686107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.686118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.686578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.686589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-06-08 01:01:22.686967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.686978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.687363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.687374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.687762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.687773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.688156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.688168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.688534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.688545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.688833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.688844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.689228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.689238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.689623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.689634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.690042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.690053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.690436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.690447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-06-08 01:01:22.690730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.690741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.691125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.691135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.691509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.691521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.691859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.691870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.692264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.692274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.692677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.692688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.693082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.693095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.693478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.693489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.693869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.693880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.694130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.694141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-06-08 01:01:22.694364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.694377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.694797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.694808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.695097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.695110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.695493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.695504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.695876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.695886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.696270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.696280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.696656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.696668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.697124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.697135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.697509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.697519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.697907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.697918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-06-08 01:01:22.698309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.698319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.698759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.698769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.699143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.699153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.699541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.699552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.699934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.699945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.700334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.700344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-06-08 01:01:22.700758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-06-08 01:01:22.700770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.701154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.701165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.701472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.701484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.701912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.701924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-06-08 01:01:22.702339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.702350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.702736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.702747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.702948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.702960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.703135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.703149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.703527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.703537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.703915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.703925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.704307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.704318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.704725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.704736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.705155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.705165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.705632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.705643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-06-08 01:01:22.706018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.706030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.706415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.706427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.706814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.706825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.707207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.707217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.707490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.707501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.707912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.707922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.708324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.708335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.708728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.708739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.709124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.709134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.709521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.709532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-06-08 01:01:22.709891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.709901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.710285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.710296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.710699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.710710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.711092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.711104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.711507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.711518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.711939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.711950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.712330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.712341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.712729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.712739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.713149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.713160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.713613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.713624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-06-08 01:01:22.714004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.714017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.714230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.714242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.714635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.714647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.715031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.715042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.715282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.715292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-06-08 01:01:22.715701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-06-08 01:01:22.715712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.716116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.716127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.716501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.716512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.716816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.716826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.717061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.717072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-06-08 01:01:22.717375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.717386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.717757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.717769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.718159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.718170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.718554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.718565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.718973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.718983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.719390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.719400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.719810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.719820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.720274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.720285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.720664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.720675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.721085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.721096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-06-08 01:01:22.721479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.721490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.721873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.721884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.722264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.722276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.722678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.722689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.723073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.723083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.723465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.723475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.723875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.723885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.724132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.724143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.724523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.724534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.724916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.724927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-06-08 01:01:22.725237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.725247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.725649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.725660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.726040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.726051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.726439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.726451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.726859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.726871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.727273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.727283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.727691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.727702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.728102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.728113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.728372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.728382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.728636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.728647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-06-08 01:01:22.729032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.729043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.729429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.729440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.729733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.729743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.730106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.730116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-06-08 01:01:22.730522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-06-08 01:01:22.730533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-06-08 01:01:22.730971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-06-08 01:01:22.730982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-06-08 01:01:22.731356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-06-08 01:01:22.731368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-06-08 01:01:22.731768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-06-08 01:01:22.731779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-06-08 01:01:22.732164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-06-08 01:01:22.732174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-06-08 01:01:22.732550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-06-08 01:01:22.732561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 
00:36:04.509 [2024-06-08 01:01:22.732753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.509 [2024-06-08 01:01:22.732764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.509 qpair failed and we were unable to recover it.
[~200 further copies of the same three-line failure record elided: connect() failed with errno = 111, followed by a sock connection error for tqpair=0x107b270 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." Logged between 01:01:22.732 and 01:01:22.811; only the timestamps differ between repetitions.]
00:36:04.786 [2024-06-08 01:01:22.812013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.786 [2024-06-08 01:01:22.812023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.786 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.812404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.812414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.812783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.812793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.813165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.813174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.813662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.813699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.814121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.814133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.814386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.814397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.814803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.814813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.815174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.815183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.815387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.815398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 
00:36:04.787 [2024-06-08 01:01:22.815666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.815676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.816039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.816048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.816333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.816342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.816688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.816699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.817066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.817076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.817484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.817494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.817877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.817886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.818096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.818107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.818307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.818318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.818690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.818701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 
00:36:04.787 [2024-06-08 01:01:22.819157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.819167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.819452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.819462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.819715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.819725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.820000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.820017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.820407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.820416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.820873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.820885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.821251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.821261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.821644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.821655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.822038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.822047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.822416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.822426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 
00:36:04.787 [2024-06-08 01:01:22.822796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.822805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.823189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.823199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.823581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.823590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.823976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.823985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.824247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.824256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.824674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.824683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.825045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.825054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.825430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.825440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.825742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.787 [2024-06-08 01:01:22.825752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.787 qpair failed and we were unable to recover it. 00:36:04.787 [2024-06-08 01:01:22.826000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.826009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-06-08 01:01:22.826405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.826415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.826784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.826793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.827154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.827163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.827527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.827537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.827936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.827945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.828353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.828362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.828746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.828755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.829161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.829170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.829528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.829538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.829945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.829955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-06-08 01:01:22.830332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.830341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.830732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.830743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.830991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.831004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.831408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.831417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.831786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.831795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.832185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.832196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.832625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.832634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.832994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.833004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.833391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.833405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.833607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.833617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 
00:36:04.788 [2024-06-08 01:01:22.833957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.833968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.834242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.834252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.834652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.834663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.835026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.835035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.835405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.835415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.788 qpair failed and we were unable to recover it. 00:36:04.788 [2024-06-08 01:01:22.835800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.788 [2024-06-08 01:01:22.835809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.836099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.836109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.836475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.836485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.836704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.836713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.837032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.837041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-06-08 01:01:22.837405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.837414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.837844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.837853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.838237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.838247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.838719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.838729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.838967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.838977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.839385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.839394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.839784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.839793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.840156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.840165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.840379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.840388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.840690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.840699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-06-08 01:01:22.841113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.841123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.841614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.841652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.841930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.841944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.842352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.842362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.842733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.842743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.843104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.843114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.843498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.843508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.843916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.843927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.844308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.844318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.844607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.844617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-06-08 01:01:22.845043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.845052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.845376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.845385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.845772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.845782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.846139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.846149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.846354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.846365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.846748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.846758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.847118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.847127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.847491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.847501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.847888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.847898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.848266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.848275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 
00:36:04.789 [2024-06-08 01:01:22.848492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.848502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.848826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.848836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.849161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.849171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.789 [2024-06-08 01:01:22.849486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.789 [2024-06-08 01:01:22.849496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.789 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.849877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.849886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.850290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.850299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.850674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.850684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.851055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.851064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.851465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.851474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.851861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.851870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-06-08 01:01:22.852361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.852370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.852773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.852784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.853032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.853043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.853291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.853302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.853709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.853719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.854115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.854124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.854527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.854537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.854894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.854904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.855289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.855298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.855716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.855725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-06-08 01:01:22.856090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.856103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.856488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.856498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.856859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.856869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.857257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.857268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.857651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.857661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.858045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.858054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.858505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.858515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.858881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.858890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.859163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.859172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 00:36:04.790 [2024-06-08 01:01:22.859546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.790 [2024-06-08 01:01:22.859555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.790 qpair failed and we were unable to recover it. 
00:36:04.790 [2024-06-08 01:01:22.859791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.790 [2024-06-08 01:01:22.859800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.790 qpair failed and we were unable to recover it.
00:36:04.796 [2024-06-08 01:01:22.937177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.796 [2024-06-08 01:01:22.937186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.796 qpair failed and we were unable to recover it.
00:36:04.796 [2024-06-08 01:01:22.937557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.937567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.937950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.937959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.938317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.938326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.938706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.938715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.939115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.939124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.939511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.939521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.939912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.939921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.940304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.940314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.940628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.940638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.941018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.941027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-06-08 01:01:22.941388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.941397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.941779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.941790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.942166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.942176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.942537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.942546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.942781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.942790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.943188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.943197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.943562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.943572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.943988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.943997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.944252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.944261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.944590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.944600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 
00:36:04.796 [2024-06-08 01:01:22.944964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.944974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.945255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.945264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.945648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.945658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.946055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.946065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.796 [2024-06-08 01:01:22.946357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.796 [2024-06-08 01:01:22.946368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.796 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.946580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.946590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.946937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.946947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.947332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.947342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.947720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.947730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.948106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.948115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-06-08 01:01:22.948503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.948513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.948868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.948878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.949154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.949164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.949533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.949543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.949945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.949955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.950328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.950337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.950726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.950735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.950936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.950945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.951328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.951338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.951707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.951716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-06-08 01:01:22.952087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.952096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.952459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.952469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.952858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.952868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.953251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.953260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.953625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.953634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.954027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.954037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.954422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.954432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.954827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.954836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.955204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.955213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.955516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.955526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 
00:36:04.797 [2024-06-08 01:01:22.955764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.955773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.956159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.956171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.956570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.956581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.956966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.797 [2024-06-08 01:01:22.956976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.797 qpair failed and we were unable to recover it. 00:36:04.797 [2024-06-08 01:01:22.957195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.957204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.957598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.957607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.957972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.957982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.958377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.958387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.958774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.958783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.959184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.959194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-06-08 01:01:22.959565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.959574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.959955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.959964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.960325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.960335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.960723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.960733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.961137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.961147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.961577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.961587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.961951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.961960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.962319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.962327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.962702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.962711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.963075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.963085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-06-08 01:01:22.963444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.963453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.963820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.963829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.964129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.964138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.964511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.964521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.964915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.964924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.965141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.965152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.965388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.965399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.965762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.965772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.966190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.966198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.966558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.966567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-06-08 01:01:22.966946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.966956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.967347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.967357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.967776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.967786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.968232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.968242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.968718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.968755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.969181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.969193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.969663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.969700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.970119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.970131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.970491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.970502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.970765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.970776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 
00:36:04.798 [2024-06-08 01:01:22.971157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.971167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.971527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.971537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.971961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.798 [2024-06-08 01:01:22.971972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.798 qpair failed and we were unable to recover it. 00:36:04.798 [2024-06-08 01:01:22.972253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.972263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.972620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.972630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.973019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.973029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.973426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.973437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.973815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.973824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.974182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.974192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.974651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.974661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-06-08 01:01:22.975035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.975045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.975411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.975421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.975869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.975879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.976166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.976176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.976678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.976715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.977043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.977055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.977453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.977464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.977817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.977827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.978191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.978202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.978563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.978573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-06-08 01:01:22.978962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.978972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.979360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.979369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.979733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.979743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.980118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.980128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.980518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.980528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.980912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.980922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.981336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.981345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.981720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.981730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.982115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.982124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.982481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.982496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-06-08 01:01:22.982907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.982916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.983277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.983287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.983646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.983657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.983951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.983960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.984320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.984329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.984702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.984712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.985119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.985129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.985565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.985575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.985956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.985965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 00:36:04.799 [2024-06-08 01:01:22.986440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.799 [2024-06-08 01:01:22.986450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:04.799 qpair failed and we were unable to recover it. 
00:36:04.799 [2024-06-08 01:01:22.986824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.799 [2024-06-08 01:01:22.986833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:04.799 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times with only the timestamps changing, the wall clock advancing from 01:01:22.987 to 01:01:23.068 and the console clock from 00:36:04.799 to 00:36:05.077 ...]
00:36:05.077 [2024-06-08 01:01:23.068971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.077 [2024-06-08 01:01:23.068980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.077 qpair failed and we were unable to recover it.
00:36:05.077 [2024-06-08 01:01:23.069382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.069391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.069603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.069614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.069892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.069902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.070302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.070312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.070693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.070703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.071069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.077 [2024-06-08 01:01:23.071078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.077 qpair failed and we were unable to recover it. 00:36:05.077 [2024-06-08 01:01:23.071470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.071479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.071852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.071861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.072233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.072243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.072638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.072649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.073079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.073089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.073449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.073458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.073827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.073836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.074220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.074230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.074585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.074595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.074985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.074994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.075360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.075370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.075770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.075780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.076139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.076149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.076564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.076574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.076973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.076982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.077265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.077282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.077687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.077697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.078064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.078075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.078442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.078452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.078831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.078841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.079228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.079237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.079620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.079630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.079996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.080005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.080373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.080382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.080770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.080780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.081075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.081085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.081464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.081474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.081829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.081847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.082233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.082242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.082600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.082610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.082988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.082998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.083380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.083390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.083792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.083802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.084074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.084085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.084409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.084419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.084798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.084807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.085201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.085210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.085711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.085748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.086162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.086174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.086663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.086699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.087116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.087128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.087623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.087659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.088019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.088031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.088383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.088393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.088823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.088833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.089208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.089218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.089712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.089750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.090205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.090217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.090689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.090726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.091140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.091151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.091639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.091676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.092093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.092105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.092385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.092396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 00:36:05.078 [2024-06-08 01:01:23.092785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.092795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.078 qpair failed and we were unable to recover it. 
00:36:05.078 [2024-06-08 01:01:23.093193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.078 [2024-06-08 01:01:23.093203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.093703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.093740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.094157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.094169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.094645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.094682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.095089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.095101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.095463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.095473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.095863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.095873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.096267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.096277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.096676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.096686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 00:36:05.079 [2024-06-08 01:01:23.097064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.079 [2024-06-08 01:01:23.097075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.079 qpair failed and we were unable to recover it. 
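On Linux, errno 111 is ECONNREFUSED: each connect() in the loop above reached 10.0.0.2 but found no listener on port 4420, so the host side keeps cycling through fresh connection attempts. A minimal bash sketch (hypothetical, not part of this test run) that reproduces the same errno against a port with no listener:

    # Hypothetical reproduction: with nothing listening on the port,
    # bash's /dev/tcp redirection fails with ECONNREFUSED (errno 111),
    # the same error the log above repeats for every attempt.
    if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
        echo "connect() refused, errno 111 (ECONNREFUSED)"
    fi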
00:36:05.079 [... connect() retry triplets continue, SPDK timestamps 01:01:23.097 through 01:01:23.099 ...]
00:36:05.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 677401 Killed "${NVMF_APP[@]}" "$@"
00:36:05.079 [... two further connect() retry triplets at 01:01:23.100 ...]
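The "Killed" line is bash's job-death notice: pid 677401, the "${NVMF_APP[@]}" job launched at line 36 of target_disconnect.sh, was terminated by SIGKILL, presumably the deliberate target kill this test exercises, which is what leaves port 4420 unanswered above. A stand-in sketch of how bash emits that notice, with sleep playing the role of the NVMF app:

    # Stand-in demo of the "<pid> Killed <command>" notice seen above.
    sleep 300 &
    pid=$!
    kill -9 "$pid"
    wait "$pid"   # bash reports the reaped job, e.g.: line N: <pid> Killed sleep 300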
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:05.079 [... connect() retry triplets interleaved with the trace above continue, SPDK timestamps 01:01:23.100 through 01:01:23.107 ...]
00:36:05.079 [... connect() retry triplets continue, SPDK timestamps 01:01:23.107 through 01:01:23.113, interleaved with the trace below ...]
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=678372
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 678372
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 678372 ']'
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:05.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:36:05.079 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
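Pieced together from the xtrace lines above, the recovery sequence is: relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, record its pid (678372 in this run), and block until it listens on /var/tmp/spdk.sock. A sketch of that sequence, using only the values the trace shows; waitforlisten is the SPDK test-harness helper named in the trace, whose internals this log does not show:

    # Reconstructed from the trace above; paths, netns name, and the
    # core/event masks are the values this particular run used.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    waitforlisten "$nvmfpid"   # harness helper; polls until the RPC socket is up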
00:36:05.080 [... connect() retry triplets continue uninterrupted while the new target starts, SPDK timestamps 01:01:23.114 through 01:01:23.133; only the timestamps differ between attempts ...]
00:36:05.081 [2024-06-08 01:01:23.133909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.133920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.134323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.134334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.134852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.134864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.135297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.135308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.135800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.135812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.136220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.136231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.136751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.136790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.137091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.137104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.137434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.137447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.137625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.137639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 
00:36:05.081 [2024-06-08 01:01:23.138050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.138060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.138493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.138505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.138814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.138824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.139240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.140439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.140464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.140854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.140866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.141281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.141292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.141769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.141781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.142161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.142171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.142507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.142519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 
00:36:05.081 [2024-06-08 01:01:23.142910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.142921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.143329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.143340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.143716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.143727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.144117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.144128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.144513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.144523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.144941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.144952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.145354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.145364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.145696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.145708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.146099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.146109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.146458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.146468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 
00:36:05.081 [2024-06-08 01:01:23.146849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.146860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.147202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.147212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.147605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.147615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.148007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.148017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.148410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.148421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.148809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.148820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.149091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.149101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.149489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.149500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.149791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.149801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.150183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.150193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 
00:36:05.081 [2024-06-08 01:01:23.150597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.081 [2024-06-08 01:01:23.150608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.081 qpair failed and we were unable to recover it. 00:36:05.081 [2024-06-08 01:01:23.151068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.151079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.151459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.151471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.151859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.151872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.152162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.152173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.152572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.152583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.152971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.152981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.153380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.153391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.153791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.153802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.154044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.154055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 
00:36:05.082 [2024-06-08 01:01:23.154444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.154455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.154846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.154857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.155107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.155117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.155569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.155581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.155972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.155983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.156374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.156384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.156773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.156785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.157211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.157222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.157611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.157622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 00:36:05.082 [2024-06-08 01:01:23.158049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.082 [2024-06-08 01:01:23.158060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.082 qpair failed and we were unable to recover it. 
00:36:05.082 [2024-06-08 01:01:23.159045] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:36:05.082 [2024-06-08 01:01:23.159088] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... connect() failed (errno = 111) / qpair failure triplets continue around the initialization messages, 01:01:23.158 through 01:01:23.161; duplicate entries omitted ...]
[... connect() failed (errno = 111) / qpair failure triplets for tqpair=0x107b270 (addr=10.0.0.2, port=4420) repeat from 01:01:23.161 through 01:01:23.188; duplicate entries omitted ...]
00:36:05.084 EAL: No free 2048 kB hugepages reported on node 1
[... connect() failed (errno = 111) / qpair failure triplets continue around the EAL message, 01:01:23.188 through 01:01:23.192; duplicate entries omitted ...]
00:36:05.084 [2024-06-08 01:01:23.192596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.192607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.193000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.193014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.193400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.193428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.193819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.193831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.194215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.194226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.194615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.194626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.194944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.194955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.195352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.195363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.195764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.195775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.196152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.196163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 
00:36:05.084 [2024-06-08 01:01:23.196547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.196558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.196949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.196960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.197275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.197286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.197660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.197671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.197876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.197886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.198212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.198223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.198556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.198568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.198939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.198950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.199167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.199177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.199632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.199643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 
00:36:05.084 [2024-06-08 01:01:23.200047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.200057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.200447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.200458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.200872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.200883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.201273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.201285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.201672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.201682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.202069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.202080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.202370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.202383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.202776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.202787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.203201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.203213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.203607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.203618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 
00:36:05.084 [2024-06-08 01:01:23.204008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.204019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.204405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.204417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.204799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.204810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.205195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.205206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.205617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.205654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.206053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.206067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.206394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.206413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.206804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.206814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.207034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.207047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 00:36:05.084 [2024-06-08 01:01:23.207241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.084 [2024-06-08 01:01:23.207252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.084 qpair failed and we were unable to recover it. 
00:36:05.085 [2024-06-08 01:01:23.207526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.207538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.207934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.207945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.208313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.208325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.208726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.208737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.209143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.209154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.209538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.209549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.209929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.209939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.210397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.210413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.210726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.210739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.211143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.211154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 
00:36:05.085 [2024-06-08 01:01:23.211539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.211551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.211880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.211891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.212258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.212269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.212654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.212665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.213046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.213056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.213434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.213445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.213824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.213834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.214219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.214230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.214617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.214628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.215014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.215024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 
00:36:05.085 [2024-06-08 01:01:23.215440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.215451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.215809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.215819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.216216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.216226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.216614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.216625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.217030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.217041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.217432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.217443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.217829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.217839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.218300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.218310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.218684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.218696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.219085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.219099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 
00:36:05.085 [2024-06-08 01:01:23.219489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.219501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.219920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.219932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.220347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.220357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.220746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.220756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.221076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.221087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.221335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.221347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.221606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.221617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.222007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.222018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.222407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.222418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.222784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.222795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 
00:36:05.085 [2024-06-08 01:01:23.223203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.223214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.223595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.223606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.223992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.224004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.224317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.224328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.224631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.224642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.225022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.225032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.225517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.225527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.225913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.225923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.226294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.085 [2024-06-08 01:01:23.226304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.085 qpair failed and we were unable to recover it. 00:36:05.085 [2024-06-08 01:01:23.226696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.226708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.226997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.227008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.227411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.227422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.227719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.227730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.228082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.228094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.228489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.228501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.228916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.228927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.229337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.229351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.229741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.229752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.230140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.230151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.230538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.230549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.230876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.230888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.231281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.231293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.231509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.231520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.231709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.231720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.231989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.232001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.232392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.232412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.232782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.232794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.233180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.233192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.233591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.233604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.233855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.233868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.234258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.234270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.234501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.234513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.234884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.234894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.235276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.235288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.235669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.235681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.236066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.236077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.236493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.236505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.236816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.236828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.237129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.237140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.237347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.237358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.237612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.237624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.238006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.238017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.238416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.238428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.238804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.238816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.239226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.239237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.239620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.239632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.240026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.240037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.240424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.240436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.240570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:05.086 [2024-06-08 01:01:23.240901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.240912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.241305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.241316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.241728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.241740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.242127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.242138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.242591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.242602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.242987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.242997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.243407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.243419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.243781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.243791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.244202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.244213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.244697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.244737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.245111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.245126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.245345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.245356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 
00:36:05.086 [2024-06-08 01:01:23.245760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.245773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.246027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.246038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.086 [2024-06-08 01:01:23.246431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.086 [2024-06-08 01:01:23.246442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.086 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.246853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.246863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.247227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.247238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.247630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.247641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.248038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.248049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.248455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.248466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.248851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.248862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 00:36:05.087 [2024-06-08 01:01:23.249171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.087 [2024-06-08 01:01:23.249182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.087 qpair failed and we were unable to recover it. 
00:36:05.087 [2024-06-08 01:01:23.249570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.087 [2024-06-08 01:01:23.249585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.087 qpair failed and we were unable to recover it.
00:36:05.087 [... the same three-line connect()/qpair-failure sequence repeats for every reconnect attempt from 2024-06-08 01:01:23.249972 through 01:01:23.303384; only the timestamps change ...]
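On Linux, errno 111 is ECONNREFUSED: the host at 10.0.0.2 answered each SYN with a reset because nothing was listening on port 4420 yet, so every qpair connect attempt in the run above failed immediately. A minimal standalone sketch (not SPDK code; the address and port are simply reused from the log) that reproduces the same errno against any reachable host with no listener on the chosen port:

    /* Hypothetical demo, not SPDK code: connect() to a port with no
     * listener and print the resulting errno, mirroring what
     * posix_sock_create reports above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the host up but no listener on the port, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }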
00:36:05.089 [... four more identical connect()/qpair-failure sequences, 2024-06-08 01:01:23.303835 through 01:01:23.305023 ...]
00:36:05.090 [2024-06-08 01:01:23.305325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:05.090 [2024-06-08 01:01:23.305354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:05.090 [2024-06-08 01:01:23.305361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:05.090 [2024-06-08 01:01:23.305367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:05.090 [2024-06-08 01:01:23.305373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:05.090 [2024-06-08 01:01:23.305419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.090 [2024-06-08 01:01:23.305431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.090 qpair failed and we were unable to recover it.
00:36:05.090 [2024-06-08 01:01:23.305510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:36:05.090 [2024-06-08 01:01:23.305644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:36:05.090 [2024-06-08 01:01:23.305782] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:36:05.090 [2024-06-08 01:01:23.305784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:36:05.090 [2024-06-08 01:01:23.305814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.090 [2024-06-08 01:01:23.305824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.090 qpair failed and we were unable to recover it.
00:36:05.090 [2024-06-08 01:01:23.306258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.090 [2024-06-08 01:01:23.306273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.090 qpair failed and we were unable to recover it.
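The NOTICE block above is SPDK's standard tracing hint, printed as the nvmf application finishes setup: while it runs, 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace', when it is the only SPDK application on the machine) snapshots events at runtime, and /dev/shm/nvmf_trace.0 can instead be copied off the host for offline analysis; both invocations are quoted from the log itself. The target is evidently only now starting its reactors (cores 4-7), which would be consistent with the connection refusals: the initiator began dialing port 4420 before the listener existed.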
00:36:05.090 [2024-06-08 01:01:23.306651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.090 [2024-06-08 01:01:23.306662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.090 qpair failed and we were unable to recover it.
00:36:05.091 [... the same connect()/qpair-failure sequence repeats for every subsequent attempt from 2024-06-08 01:01:23.306998 through 01:01:23.329040; only the timestamps change ...]
00:36:05.091 [2024-06-08 01:01:23.329416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.329427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.329828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.329838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.330224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.330234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.330507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.330517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.330817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.330828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.331219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.331229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.331617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.331628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.332032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.332043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.332434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.332445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.332838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.332849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 
00:36:05.091 [2024-06-08 01:01:23.333255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.333265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.333667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.333678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.334106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.334116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.334501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.334512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.334903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.334913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.335320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.335330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.335541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.335552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.335844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.335855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.336259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.336269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.336653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.336665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 
00:36:05.091 [2024-06-08 01:01:23.336948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.336959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.337192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.337203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.337590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.337601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.337975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.337985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.338364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.338375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.338759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.338770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.339155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.339166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.339589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.339601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.339985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.339995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 00:36:05.091 [2024-06-08 01:01:23.340388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.091 [2024-06-08 01:01:23.340399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.091 qpair failed and we were unable to recover it. 
00:36:05.091 [2024-06-08 01:01:23.340744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.340755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.341167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.341178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.341629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.341668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.342062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.342075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.342333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.342349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.342728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.342739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.343131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.343141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.343534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.343545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.343953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.343964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.344282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.344293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 
00:36:05.092 [2024-06-08 01:01:23.344702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.344713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.345100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.345111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.345470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.345483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.345881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.345892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.346103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.346114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.346513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.346523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.346725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.346735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.346959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.346970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.347370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.347380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.092 [2024-06-08 01:01:23.347769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.347779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 
00:36:05.092 [2024-06-08 01:01:23.348167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.092 [2024-06-08 01:01:23.348178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.092 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.348589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.348602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.348995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.349006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.349400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.349422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.349874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.349885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.350092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.350102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.350400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.364 [2024-06-08 01:01:23.350417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.364 qpair failed and we were unable to recover it. 00:36:05.364 [2024-06-08 01:01:23.350811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.350821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.351077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.351089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.351510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.351521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 
00:36:05.365 [2024-06-08 01:01:23.351844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.351855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.352138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.352150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.352572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.352583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.352819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.352829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.353153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.353164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.353593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.353604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.353810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.353820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.354087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.354100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.354378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.354388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.354794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.354806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 
00:36:05.365 [2024-06-08 01:01:23.355184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.355194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.355591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.355602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.355994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.356004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.356393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.356408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.356783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.356793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.357024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.357034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.357467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.357478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.357874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.357885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.358102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.358113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.358547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.358558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 
00:36:05.365 [2024-06-08 01:01:23.358946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.358956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.359346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.359356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.359753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.359764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.360023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.360034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.360247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.360258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.360735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.360746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.361153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.361164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.361427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.361438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.361811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.361825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.362161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.362173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 
00:36:05.365 [2024-06-08 01:01:23.362443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.365 [2024-06-08 01:01:23.362454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.365 qpair failed and we were unable to recover it. 00:36:05.365 [2024-06-08 01:01:23.362850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.362861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.363246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.363257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.363506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.363517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.363914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.363925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.364003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.364013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.364274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.364284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.364676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.364686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.365082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.365092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.365348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.365359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 
00:36:05.366 [2024-06-08 01:01:23.365778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.365789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.366177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.366188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.366597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.366609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.366999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.367011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.367395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.367410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.367728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.367738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.367943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.367954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.368333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.368344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.368770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.368781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.369167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.369179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 
00:36:05.366 [2024-06-08 01:01:23.369589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.369600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.369927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.369937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.370338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.370350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.370747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.370758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.371144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.371154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.371439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.371450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.371847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.371858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.372244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.372255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.372466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.372479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.372863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.372873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 
00:36:05.366 [2024-06-08 01:01:23.373280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.373292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.373700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.373711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.373927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.373937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.374138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.374149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.374500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.374512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.374776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.374786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.375176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.366 [2024-06-08 01:01:23.375186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.366 qpair failed and we were unable to recover it. 00:36:05.366 [2024-06-08 01:01:23.375590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.367 [2024-06-08 01:01:23.375600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.367 qpair failed and we were unable to recover it. 00:36:05.367 [2024-06-08 01:01:23.376009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.367 [2024-06-08 01:01:23.376020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.367 qpair failed and we were unable to recover it. 00:36:05.367 [2024-06-08 01:01:23.376485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.367 [2024-06-08 01:01:23.376497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.367 qpair failed and we were unable to recover it. 
00:36:05.367 [2024-06-08 01:01:23.376905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.367 [2024-06-08 01:01:23.376915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.367 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for every reconnect attempt from 01:01:23.377 through 01:01:23.451, always against tqpair=0x107b270 at 10.0.0.2 port 4420, always with errno = 111 ...]
00:36:05.373 [2024-06-08 01:01:23.451703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.373 [2024-06-08 01:01:23.451714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.373 qpair failed and we were unable to recover it.
00:36:05.373 [2024-06-08 01:01:23.452152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.452163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.452413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.452424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.452814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.452825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.453209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.453220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.453613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.453624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.454012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.454023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.454432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.454443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.454833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.454844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.455217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.455229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.455614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.455624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 
00:36:05.373 [2024-06-08 01:01:23.456032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.456043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.456503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.456515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.456938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.456949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.457337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.457347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.457636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.457649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.458015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.458026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.458398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.458412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.458779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.458789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.459202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.459212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.459663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.459701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 
00:36:05.373 [2024-06-08 01:01:23.459911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.459924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.460296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.460306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.460499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.460511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.460906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.460917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.461303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.461313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.461755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.373 [2024-06-08 01:01:23.461767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.373 qpair failed and we were unable to recover it. 00:36:05.373 [2024-06-08 01:01:23.462176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.462187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.462572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.462583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.462787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.462797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.463186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.463197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 
00:36:05.374 [2024-06-08 01:01:23.463604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.463615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.464001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.464012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.464262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.464272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.464683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.464697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.465112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.465123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.465306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.465317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.465703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.465715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.466104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.466114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.466522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.466533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.467007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.467018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 
00:36:05.374 [2024-06-08 01:01:23.467409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.467420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.467783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.467794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.468073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.468083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.468333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.468343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.468808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.468818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.469043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.469056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.469438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.469450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.469832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.469842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.470273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.470283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.470653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.470664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 
00:36:05.374 [2024-06-08 01:01:23.471044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.471054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.471441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.471452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.471842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.471853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.374 qpair failed and we were unable to recover it. 00:36:05.374 [2024-06-08 01:01:23.472238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.374 [2024-06-08 01:01:23.472249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.472478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.472490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.472709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.472720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.472942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.472953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.473348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.473358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.473770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.473781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.473864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.473873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 
00:36:05.375 [2024-06-08 01:01:23.474168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.474180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.474425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.474436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.474811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.474821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.475037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.475048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.375 [2024-06-08 01:01:23.475423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.375 [2024-06-08 01:01:23.475434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.375 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.475871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.475882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.476269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.476280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.476677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.476689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.477100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.477110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.477507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.477518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 
00:36:05.376 [2024-06-08 01:01:23.477758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.477768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.478084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.478093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.478371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.478382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.478679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.478691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.479075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.479085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.479425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.479436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.479818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.479829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.480256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.480266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.480647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.480658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.481044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.481054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 
00:36:05.376 [2024-06-08 01:01:23.481467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.481478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.481864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.481875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.482259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.482270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.482678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.482689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.482910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.482920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.483302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.483312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.483723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.483733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.484132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.484144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.484549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.484560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.484836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.484846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 
00:36:05.376 [2024-06-08 01:01:23.485231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.485241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.485626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.485637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.486054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.486065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.486135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.486145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.486420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.486431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.486502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.486511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.486881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.486891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.487205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.487216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.487610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.487621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.488005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.488015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 
00:36:05.376 [2024-06-08 01:01:23.488408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.376 [2024-06-08 01:01:23.488419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.376 qpair failed and we were unable to recover it. 00:36:05.376 [2024-06-08 01:01:23.488616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.488627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.489024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.489034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.489421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.489432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.489810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.489820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.489889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.489898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.490245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.490256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.490476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.490488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.490853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.490863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.491249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.491260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 
00:36:05.377 [2024-06-08 01:01:23.491551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.491563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.491945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.491955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.492349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.492359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.492741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.492751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.492969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.492979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.493263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.493273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.493635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.493646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.493920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.493930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.494317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.494327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.494708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.494718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 
00:36:05.377 [2024-06-08 01:01:23.494987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.494997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.495390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.495404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.495786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.495796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.496191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.496202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.496589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.496599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.497001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.497011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.497455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.497466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.497881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.497891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.497960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.497970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 00:36:05.377 [2024-06-08 01:01:23.498201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.377 [2024-06-08 01:01:23.498211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.377 qpair failed and we were unable to recover it. 
00:36:05.377 [2024-06-08 01:01:23.498420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.377 [2024-06-08 01:01:23.498430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.377 qpair failed and we were unable to recover it.
00:36:05.383 [... the same three-line error block repeats roughly 200 more times between 01:01:23.498 and 01:01:23.574 with only the timestamps changing: every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111 and tqpair=0x107b270 cannot be recovered ...]
00:36:05.383 [2024-06-08 01:01:23.574527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.383 [2024-06-08 01:01:23.574538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.383 qpair failed and we were unable to recover it. 00:36:05.383 [2024-06-08 01:01:23.574932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.383 [2024-06-08 01:01:23.574942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.383 qpair failed and we were unable to recover it. 00:36:05.383 [2024-06-08 01:01:23.575353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.383 [2024-06-08 01:01:23.575364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.383 qpair failed and we were unable to recover it. 00:36:05.383 [2024-06-08 01:01:23.575759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.383 [2024-06-08 01:01:23.575771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.383 qpair failed and we were unable to recover it. 00:36:05.383 [2024-06-08 01:01:23.576155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.383 [2024-06-08 01:01:23.576166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.383 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.576551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.576565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.576920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.576930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.577319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.577331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.577718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.577729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.578161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.578172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 
00:36:05.384 [2024-06-08 01:01:23.578380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.578392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.578781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.578792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.579229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.579239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.579610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.579648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.580083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.580096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.580526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.580537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.580607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.580615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.580888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.580898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.581282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.581292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.581698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.581709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 
00:36:05.384 [2024-06-08 01:01:23.582094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.582105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.582491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.582502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.582978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.582988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.583212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.583222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.583599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.583610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.583995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.584006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.584393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.584407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.584794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.584804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.585195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.585205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 00:36:05.384 [2024-06-08 01:01:23.585618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.384 [2024-06-08 01:01:23.585629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.384 qpair failed and we were unable to recover it. 
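The errno = 111 in the entries above is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420, so every connect() issued by posix_sock_create() fails immediately, the NVMe/TCP qpair can never be established, and the initiator retries and logs the same three lines each time. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the address and port are taken from the log) that reproduces the same errno when no listener is present:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same destination as the failing qpair in the log. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on the port this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }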
00:36:05.384 [2024-06-08 01:01:23.586345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.384 [2024-06-08 01:01:23.586358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.384 qpair failed and we were unable to recover it.
00:36:05.384 [2024-06-08 01:01:23.586578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088e30 is same with the state(5) to be set
00:36:05.384 Read completed with error (sct=0, sc=8)
00:36:05.384 starting I/O failed
00:36:05.384 Write completed with error (sct=0, sc=8)
00:36:05.384 starting I/O failed
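The single nvme_tcp_qpair_set_recv_state message and the burst of "completed with error (sct=0, sc=8) / starting I/O failed" lines mark the point where the test's outstanding reads and writes are failed back as the qpair is torn down: status code type 0 with status code 8 is the NVMe generic status "Command Aborted due to SQ Deletion", the status given to commands aborted on a dying queue. A small decoder for the (sct, sc) pair printed in these lines (a sketch keyed on the NVMe base specification's status tables, not an SPDK helper):

    #include <stdio.h>

    /* Decode the (sct, sc) pair from the log; values follow the NVMe base
     * specification's status code tables (status code type 0 = generic). */
    static const char *nvme_status_str(unsigned sct, unsigned sc)
    {
        if (sct != 0)
            return "non-generic status code type";
        switch (sc) {
        case 0x0: return "Successful Completion";
        case 0x4: return "Data Transfer Error";
        case 0x7: return "Command Abort Requested";
        case 0x8: return "Command Aborted due to SQ Deletion";
        default:  return "other generic status";
        }
    }

    int main(void)
    {
        /* The pair seen throughout the aborted-I/O burst. */
        printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 8));
        return 0;
    }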
00:36:05.385 [2024-06-08 01:01:23.587518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:05.385 [2024-06-08 01:01:23.587935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.385 [2024-06-08 01:01:23.587981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbf4c000b90 with addr=10.0.0.2, port=4420
00:36:05.385 qpair failed and we were unable to recover it.
00:36:05.385 [2024-06-08 01:01:23.588406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.385 [2024-06-08 01:01:23.588420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.385 qpair failed and we were unable to recover it.
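The CQ transport error -6 reported by spdk_nvme_qpair_process_completions() is -ENXIO ("No such device or address", as the message itself notes): once the TCP connection is gone, the completion path returns a negative value instead of a completion count, and the caller is expected to treat the qpair as failed. Immediately afterwards the initiator keeps retrying the connection (note the one attempt on a different tqpair, 0x7fbf4c000b90, before it returns to 0x107b270). A hedged sketch of the polling pattern involved, using the public SPDK API (the error handling is illustrative, not the test's actual code):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Reap completions on one qpair; spdk_nvme_qpair_process_completions()
     * returns the number of completions processed, or a negative errno such
     * as -ENXIO once the transport connection has failed. */
    static bool poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        if (rc < 0) {
            fprintf(stderr, "CQ transport error %d on qpair\n", rc);
            return false; /* caller reconnects or fails the test */
        }
        return true;
    }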
00:36:05.385 [2024-06-08 01:01:23.591659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.385 [2024-06-08 01:01:23.591697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.385 qpair failed and we were unable to recover it.
00:36:05.388 [2024-06-08 01:01:23.635343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.388 [2024-06-08 01:01:23.635353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.388 qpair failed and we were unable to recover it.
00:36:05.388 [2024-06-08 01:01:23.635581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.388 [2024-06-08 01:01:23.635592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.388 qpair failed and we were unable to recover it. 00:36:05.388 [2024-06-08 01:01:23.635790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.388 [2024-06-08 01:01:23.635801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.388 qpair failed and we were unable to recover it. 00:36:05.388 [2024-06-08 01:01:23.636164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.388 [2024-06-08 01:01:23.636175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.388 qpair failed and we were unable to recover it. 00:36:05.388 [2024-06-08 01:01:23.636283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.388 [2024-06-08 01:01:23.636293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.388 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.636686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.636699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.637089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.637100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.637491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.637503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.637886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.637900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.638108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.638120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.638413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.638425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-06-08 01:01:23.638814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.638825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.639215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-06-08 01:01:23.639227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-06-08 01:01:23.639438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.639449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.639833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.639844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.640230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.640241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.640714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.640725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.641136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.641147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.641371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.641382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.641758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.641770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.642141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.642152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-06-08 01:01:23.642375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.642387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.642790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.642802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.643057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.643069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.643453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.643466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.643729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.643741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.644145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.644157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.644366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.644377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.644709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.644720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.645054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.645065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.645380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.645390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-06-08 01:01:23.645583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.645595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.645779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.645789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.646190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.646200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.646398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.646413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.646637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.646648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.647094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.647105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.647512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.647524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.647742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.647752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.648098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.648108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.648364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.648374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-06-08 01:01:23.648641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.648654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.649034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.649046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.649455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.649466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.649860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.649871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.650263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.650274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.650693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.650703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.651112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.651124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.651504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.651515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-06-08 01:01:23.651902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-06-08 01:01:23.651913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.652167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.652177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-06-08 01:01:23.652599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.652610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.652999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.653009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.653420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.653432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.653638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.653648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.653854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.653864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.654244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.654255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.654649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.654659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.654931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.654944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.655352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.655364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.655804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.655815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-06-08 01:01:23.656105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.656116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.656455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.656466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.656741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.656756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.656958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.656968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.657330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.657340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.657600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.657611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.658001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.658012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.658417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.658433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.658848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.658858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.659247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.659258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-06-08 01:01:23.659557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.659568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.659953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.659964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.660344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.660355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.660740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.660750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.660947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.660955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.661303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.661314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.661697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.661708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.661994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.662006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.662391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.662405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.662790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.662800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-06-08 01:01:23.663091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.663101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.663350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.663361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.663603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.663614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.663990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.664001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.664387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.664397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-06-08 01:01:23.664808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-06-08 01:01:23.664819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.665023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.665033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.665427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.665438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.665827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.665837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.666223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.666234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-06-08 01:01:23.666623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.666634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.667043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.667054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.667438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.667449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.667690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.667699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.668064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.668077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.668455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.668466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.668730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.668740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.669156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.669167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.669447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.669459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.669693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.669704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-06-08 01:01:23.670089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.670100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.670485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.670496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.670721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.671104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.671114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.671499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.671510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.671918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.671928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.672341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.672353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.672692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.672703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.673052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.673063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.673478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.673489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-06-08 01:01:23.673875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.673886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.674136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.674147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.674551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.674562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.674814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.674824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.675006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.675017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.675414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.675425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.675805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.675816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.676019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.676029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.676159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.676168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.676588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.676599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-06-08 01:01:23.676982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.676993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-06-08 01:01:23.677406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-06-08 01:01:23.677417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.677782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.677793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.678173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.678183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.678586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.678597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.678956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.678966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.679224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.679234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.679583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.679595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.679886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.679897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-06-08 01:01:23.680285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.680296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-06-08 01:01:23.680507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-06-08 01:01:23.680517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it.
[... the same three-line error group repeats continuously from 01:01:23.680 through 01:01:23.755 -- roughly 200 further connect() attempts against tqpair=0x107b270 at 10.0.0.2 port 4420, every one failing with errno = 111 and ending in "qpair failed and we were unable to recover it." ...]
00:36:05.673 [2024-06-08 01:01:23.755952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.755963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.756182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.756193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.756589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.756601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.756989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.757000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.757371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.757381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.757770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.757781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.757996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.758006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.758223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.758233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.758624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.758634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.759023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.759034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 
00:36:05.673 [2024-06-08 01:01:23.759405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.759416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.759796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.759807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.759966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.759979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.760393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.760410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.760746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.760758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.761143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.761153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.761540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.761551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.761962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.761972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.762360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.762370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.762841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.762852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 
00:36:05.673 [2024-06-08 01:01:23.763188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.763199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.763690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.763730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.764155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.764167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.764617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.764653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-06-08 01:01:23.765052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-06-08 01:01:23.765064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.765271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.765282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.765706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.765717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.766098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.766108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.766501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.766511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.766892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.766901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-06-08 01:01:23.767269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.767279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.767667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.767677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.768041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.768050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.768422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.768432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.768828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.768838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.769048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.769059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.769258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.769271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.769723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.769733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.770143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.770154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.770519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.770531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-06-08 01:01:23.770906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.770915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.771277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.771286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.771529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.771539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.771940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.771949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.772395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.772409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.772788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.772797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.773222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.773231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.773532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.773542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.773750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.773762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.774175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.774185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-06-08 01:01:23.774599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.774609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.775039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.775048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.775438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.775448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.775834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.775843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.776248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.776257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.776463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.776473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.776912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.776921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.777307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.777316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.777702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.777712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-06-08 01:01:23.778118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-06-08 01:01:23.778127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-06-08 01:01:23.778540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.778550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.778919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.778928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.779212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.779221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.779424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.779434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.779827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.779836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.780122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.780131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.780537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.780549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.780822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.780832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.781115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.781124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.781324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.781333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-06-08 01:01:23.781527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.781537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.781790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.781800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.782157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.782166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.782376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.782385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.782545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.782554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.782938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.782948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.783411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.783421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.783771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.783781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.784183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.784193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.784510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.784519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-06-08 01:01:23.784941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.784950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.785354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.785363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.785749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.785759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.786166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.786176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.786580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.786590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.787012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.787022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.787233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.787242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.787649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.787659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.788104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.788114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.788521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.788530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-06-08 01:01:23.788848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.788858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.789069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.789078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.789353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.789363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.789782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.789792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.790005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.790014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.790265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.790274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.790642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.790652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.791071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-06-08 01:01:23.791080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-06-08 01:01:23.791509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.791518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.791890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.791899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-06-08 01:01:23.792306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.792316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.792685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.792694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.793097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.793107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.793499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.793509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.793797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.793807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.794181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.794191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.794593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.794602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.795007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.795017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.795386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.795396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.795782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.795792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-06-08 01:01:23.795999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.796011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.796195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.796206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.796446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.796456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.796698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.796707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.797102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.797112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.797528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.797538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.797915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.797924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.798337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.798346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.798720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.798729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.799169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.799178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-06-08 01:01:23.799552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.799562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.799771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.799780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.800053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.800062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.800483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.800494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.800698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.800709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.801142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.801151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.801564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.801574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.801979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.801989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.802394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.802407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-06-08 01:01:23.802811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-06-08 01:01:23.802820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-06-08 01:01:23.803025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.676 [2024-06-08 01:01:23.803034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.676 qpair failed and we were unable to recover it.
[... the same three-line sequence — "connect() failed, errno = 111", "sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats for roughly 200 further connection attempts between 01:01:23.803 and 01:01:23.879; only the timestamps differ ...]
00:36:05.682 [2024-06-08 01:01:23.878739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.682 [2024-06-08 01:01:23.878749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.682 qpair failed and we were unable to recover it.
00:36:05.682 [2024-06-08 01:01:23.879114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-06-08 01:01:23.879124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-06-08 01:01:23.879521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-06-08 01:01:23.879531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-06-08 01:01:23.879934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-06-08 01:01:23.879944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-06-08 01:01:23.880166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-06-08 01:01:23.880175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.880636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.880646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.881076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.881085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.881331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.881340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.881593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.881604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.881862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.881873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.882122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.882131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 
00:36:05.683 [2024-06-08 01:01:23.882555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.882565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.882809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.882821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.883096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.883105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.883521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.883531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.883956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.883966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.884356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.884365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.884770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.884780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.885197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.885206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.885645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.885655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.886055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.886065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 
00:36:05.683 [2024-06-08 01:01:23.886473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.886483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.886854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.886864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.887072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.887081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.887433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.887444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.887646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.887655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.887871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.887880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.888200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.888210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.888601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.888612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.889023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.889032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.889417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.889427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 
00:36:05.683 [2024-06-08 01:01:23.889850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.889860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.683 qpair failed and we were unable to recover it. 00:36:05.683 [2024-06-08 01:01:23.890266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.683 [2024-06-08 01:01:23.890275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.890676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.890687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.891103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.891112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.891589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.891625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.892000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.892012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.892324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.892334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.892780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.892791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.893005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.893019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.893298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.893308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 
00:36:05.684 [2024-06-08 01:01:23.893706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.893716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.894125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.894134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.894546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.894556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.894968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.894978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.895389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.895398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.895818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.895828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.896234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.896243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.896751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.896787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.897191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.897204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.897640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.897677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 
00:36:05.684 [2024-06-08 01:01:23.898106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.898118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.898412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.898423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.898579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.898589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.898790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.898800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.899014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.899024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.899429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.899439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.899656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.899665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.900085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.900094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.900487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.900498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.900925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.900935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 
00:36:05.684 [2024-06-08 01:01:23.901126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.901136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.901527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.901537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.901941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.901950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.902366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.902375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.902776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.902785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.903203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.903212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.903621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.903631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.903720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.903729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.904006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.904015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 00:36:05.684 [2024-06-08 01:01:23.904439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.684 [2024-06-08 01:01:23.904450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.684 qpair failed and we were unable to recover it. 
00:36:05.685 [2024-06-08 01:01:23.904871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.904881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.905296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.905306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.905743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.905753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.906160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.906170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.906546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.906556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.907000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.907010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.907436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.907446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.907663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.907673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.908080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.908090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.908186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.908195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 
00:36:05.685 [2024-06-08 01:01:23.908584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.908595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.908977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.908986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.909396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.909409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.909802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.909811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.910224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.910233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.910676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.910686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.911085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.911094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.911500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.911510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.911874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.911883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.912296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.912306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 
00:36:05.685 [2024-06-08 01:01:23.912739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.912749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.913153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.913162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.913563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.913573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.913983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.913992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.914431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.914441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.914819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.914828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.915026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.915036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.915430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.915440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.915811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.915821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.916047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.916057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 
00:36:05.685 [2024-06-08 01:01:23.916333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.916343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.916563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.916573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.916849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.916859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.917053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.917062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.917298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.917307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.917377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.917385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.917800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.917811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.918189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.685 [2024-06-08 01:01:23.918198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.685 qpair failed and we were unable to recover it. 00:36:05.685 [2024-06-08 01:01:23.918589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.918599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.918916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.918925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 
00:36:05.686 [2024-06-08 01:01:23.919336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.919345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.919775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.919784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.920176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.920185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.920588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.920598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.921077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.921087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.921461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.921471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.921912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.921922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.922312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.922321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.922587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.922597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.922838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.922850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 
00:36:05.686 [2024-06-08 01:01:23.923121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.923130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.923554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.923563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.923768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.923777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.924058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.924068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.924542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.924552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.924810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.924819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.925240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.925249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.925592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.925602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.925797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.925806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.926164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.926173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 
00:36:05.686 [2024-06-08 01:01:23.926385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.926394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.926769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.926779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.927116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.927125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.927543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.927556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.927865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.927875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.928081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.928090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.928287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.928298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.928506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.928515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.928910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.928920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.929290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.929299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 
00:36:05.686 [2024-06-08 01:01:23.929691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.929701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.930060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.930069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.930383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.930392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.930629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.930639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.686 [2024-06-08 01:01:23.930862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.686 [2024-06-08 01:01:23.930871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.686 qpair failed and we were unable to recover it. 00:36:05.687 [2024-06-08 01:01:23.931266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-06-08 01:01:23.931276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-06-08 01:01:23.931685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-06-08 01:01:23.931694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-06-08 01:01:23.932064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-06-08 01:01:23.932073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-06-08 01:01:23.932423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-06-08 01:01:23.932434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-06-08 01:01:23.932837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-06-08 01:01:23.932846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 
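Errno 111 on Linux is ECONNREFUSED: the TCP SYN reached a live host, but nothing was listening on the port, so the kernel answered with a RST. Here that means the NVMe/TCP target at 10.0.0.2 was not (or no longer) accepting connections on port 4420 while the initiator kept retrying, which is exactly what a target-disconnect test provokes. The minimal C sketch below is not SPDK code; it only mirrors the plain connect() call that posix_sock_create issues, and the 10.0.0.2:4420 endpoint is copied from the log, so substitute any reachable host with the port closed to reproduce errno 111.

    /* repro_econnrefused.c - reproduce "connect() failed, errno = 111".
     * Not SPDK code; a plain-socket sketch of the call site in posix.c. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With the host up and the port closed, this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

If the host were unreachable instead, connect() would report EHOSTUNREACH or time out (errno 110); the steady errno 111 above indicates the target machine was up and actively refusing, i.e. the nvmf target process was down rather than the link.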
00:36:05.687 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:36:05.957 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:36:05.957 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:05.957 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:05.957 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:05.957 [connect()/qpair failures continue in the background, 2024-06-08 01:01:23.933287 through 01:01:23.935877]
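The autotest_common.sh lines above are the tail of the harness's bounded wait for the freshly started nvmf target: a retry counter i counts down until the application is up, `(( i == 0 ))` is the timeout check, and `return 0` reports success. A hedged, illustrative sketch of that pattern; the helper name, retry count, and interval below are assumptions, not the values used by autotest_common.sh:

    # Illustrative bounded wait: poll until the target process is alive,
    # give up after 20 attempts. Mirrors the (( i == 0 )) timeout check.
    wait_for_app() {
        local pid=$1 i
        for ((i = 20; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null && return 0   # process is up
            sleep 0.5
        done
        (( i == 0 )) && return 1                     # retries exhausted
    }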
00:36:05.957 [2024-06-08 01:01:23.936275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.957 [2024-06-08 01:01:23.936288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.957 qpair failed and we were unable to recover it.
00:36:05.960 [the same three-line connect()/qpair failure repeats with advancing timestamps through 2024-06-08 01:01:23.972806]
00:36:05.960 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:05.960 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:05.960 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:05.960 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:05.960 [connect()/qpair failures continue in the background, 2024-06-08 01:01:23.973124 through 01:01:23.975621]
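rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py. The call above asks the running target to create a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0; the bare "Malloc0" echoed further down is the RPC reply. Outside the harness the same step would look like this (path relative to an SPDK checkout):

    # Create the RAM-backed test bdev on a running SPDK target.
    # Positional arguments are total size in MiB and block size in bytes;
    # -b sets the bdev name.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0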
00:36:05.960 [2024-06-08 01:01:23.976036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.960 [2024-06-08 01:01:23.976045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107b270 with addr=10.0.0.2, port=4420
00:36:05.961 qpair failed and we were unable to recover it.
00:36:05.961 [the same three-line connect()/qpair failure repeats with advancing timestamps through 2024-06-08 01:01:23.986756]
00:36:05.961 [connect()/qpair failures continue in the background, 2024-06-08 01:01:23.987017 through 01:01:23.988102]
00:36:05.961 Malloc0
00:36:05.961 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:05.961 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:05.961 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:05.961 [connect()/qpair failures continue in the background, 2024-06-08 01:01:23.988488 through 01:01:23.989985]
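nvmf_create_transport -t tcp instantiates the target-side TCP transport; the "*** TCP Transport Init ***" notice just below is its acknowledgement. The initiator's connect() retries keep failing because no subsystem listener exists on 10.0.0.2:4420 at this point. For reference, a typical TCP target bring-up via rpc.py looks like the sketch below; the subsystem NQN and serial number are illustrative, and the tuning flags the test passes (such as -o above) are omitted:

    # Minimal NVMe-oF/TCP target bring-up on a running SPDK app.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Only after the listener is added can connect() to 10.0.0.2:4420 succeed.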
00:36:05.961 01:01:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:05.962 [connect()/qpair failures continue in the background, 2024-06-08 01:01:23.990255 through 01:01:23.993366]
00:36:05.962 [2024-06-08 01:01:23.995938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() retry failures (errno = 111) elided ...]
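The rpc_cmd trace above is the harness wrapper around SPDK's scripts/rpc.py, and the *** TCP Transport Init *** notice is the target acknowledging the call. A minimal standalone sketch of the same step, assuming an nvmf_tgt already running on the default /var/tmp/spdk.sock RPC socket and a checkout of the SPDK repo:

    # Create the TCP transport; flags are taken verbatim from the trace above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o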
00:36:05.963 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:05.963 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:05.963 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:36:05.963 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures (errno = 111) elided ...]
00:36:05.964 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
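The bare Malloc0 line earlier is the malloc bdev name echoed back by the RPC that created it, which later becomes the subsystem's namespace. A sketch of that step plus the subsystem creation traced above; the 64 MiB size and 512-byte block size are illustrative assumptions, not values from this log:

    # Backing bdev (size and block size assumed for illustration).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Subsystem allowing any host (-a) with a fixed serial number (-s).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001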
00:36:05.964 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:05.964 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures (errno = 111) elided ...]
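Attaching the bdev to the subsystem is a single RPC, mirrored directly from the trace above:

    # Expose Malloc0 as a namespace of cnode1 (NSID is auto-assigned
    # when -n is not given).
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0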
00:36:05.965 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:36:05.965 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:05.965 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry failures (errno = 111) elided ...]
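The listener RPC traced above, plus the discovery listener added just below, is what finally opens 10.0.0.2:4420; until it lands, every connect() from the initiator is refused. The same two calls as standalone commands:

    # Data listener for cnode1, then the discovery service, both on
    # 10.0.0.2:4420 exactly as traced in this log.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420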
00:36:05.965 [2024-06-08 01:01:24.036205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-06-08 01:01:24.046782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-06-08 01:01:24.046883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-06-08 01:01:24.046902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-06-08 01:01:24.046910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-06-08 01:01:24.046916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
[2024-06-08 01:01:24.046936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.965 qpair failed and we were unable to recover it.
00:36:05.965 01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
01:01:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 677535
[2024-06-08 01:01:24.056849] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-06-08 01:01:24.056931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-06-08 01:01:24.056948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-06-08 01:01:24.056955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-06-08 01:01:24.056961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
[2024-06-08 01:01:24.056976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
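From this point the failure mode changes: the TCP connect now succeeds (the Listening notice above), but the Fabrics CONNECT for I/O qpair 4 is rejected with Unknown controller ID 0x1. In the status line, sct 1, sc 130 is Status Code Type 1 (command specific) with Status Code 0x82, which the NVMe over Fabrics spec defines for CONNECT as invalid parameters, and -6 is -ENXIO, matching the logged "No such device or address". A hypothetical host-side probe of the same listener with nvme-cli (the test itself drives the SPDK initiator, not the kernel host stack):

    # Hypothetical standalone probe; parameters copied from the log.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1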
00:36:05.965 [2024-06-08 01:01:24.066750] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.966 [2024-06-08 01:01:24.066832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-06-08 01:01:24.066849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-06-08 01:01:24.066856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-06-08 01:01:24.066862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
[2024-06-08 01:01:24.066877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.966 qpair failed and we were unable to recover it.
[... the same six-line CONNECT rejection repeats roughly every 10 ms, attempts timestamped 01:01:24.076725 through 01:01:24.227118, elided ...]
00:36:06.229 [2024-06-08 01:01:24.237073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.229 [2024-06-08 01:01:24.237162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.229 [2024-06-08 01:01:24.237187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.229 [2024-06-08 01:01:24.237195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.229 [2024-06-08 01:01:24.237202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:06.229 [2024-06-08 01:01:24.237221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.229 qpair failed and we were unable to recover it.
00:36:06.229 [2024-06-08 01:01:24.247197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.229 [2024-06-08 01:01:24.247288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.229 [2024-06-08 01:01:24.247313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.229 [2024-06-08 01:01:24.247322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.229 [2024-06-08 01:01:24.247329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.229 [2024-06-08 01:01:24.247347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.229 qpair failed and we were unable to recover it. 00:36:06.229 [2024-06-08 01:01:24.257147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.229 [2024-06-08 01:01:24.257225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.229 [2024-06-08 01:01:24.257243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.229 [2024-06-08 01:01:24.257250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.229 [2024-06-08 01:01:24.257256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.229 [2024-06-08 01:01:24.257271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.229 qpair failed and we were unable to recover it. 00:36:06.229 [2024-06-08 01:01:24.267264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.229 [2024-06-08 01:01:24.267340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.267357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.267364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.267370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.267384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 
00:36:06.230 [2024-06-08 01:01:24.277132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.277208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.277224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.277231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.277237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.277251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.287290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.287378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.287395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.287409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.287415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.287429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.297300] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.297378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.297394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.297407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.297418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.297433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 
00:36:06.230 [2024-06-08 01:01:24.307312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.307387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.307409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.307417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.307423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.307437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.317466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.317588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.317604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.317611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.317618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.317632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.327450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.327536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.327553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.327559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.327565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.327579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 
00:36:06.230 [2024-06-08 01:01:24.337453] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.337529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.337545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.337552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.337559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.337573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.347489] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.347567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.347584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.347591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.347597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.347611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.357462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.357542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.357558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.357565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.357571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.357585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 
00:36:06.230 [2024-06-08 01:01:24.367519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.367608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.367624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.367631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.367637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.367651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.377500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.377579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.377595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.377602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.377608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.377623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 00:36:06.230 [2024-06-08 01:01:24.387546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.387623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.387640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.387651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.230 [2024-06-08 01:01:24.387657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.230 [2024-06-08 01:01:24.387671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.230 qpair failed and we were unable to recover it. 
00:36:06.230 [2024-06-08 01:01:24.397587] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.230 [2024-06-08 01:01:24.397674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.230 [2024-06-08 01:01:24.397690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.230 [2024-06-08 01:01:24.397697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.397703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.397716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.407627] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.407715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.407735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.407743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.407749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.407763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.417632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.417711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.417727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.417734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.417740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.417754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 
00:36:06.231 [2024-06-08 01:01:24.427766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.427852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.427869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.427876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.427882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.427897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.437697] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.437771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.437788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.437795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.437801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.437815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.447752] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.447857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.447873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.447880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.447887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.447901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 
00:36:06.231 [2024-06-08 01:01:24.457690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.457768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.457784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.457791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.457797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.457811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.467762] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.467847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.467864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.467871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.467877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.467891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.477832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.477912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.477928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.477939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.477945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.477959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 
00:36:06.231 [2024-06-08 01:01:24.487811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.487895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.487911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.487918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.487924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.487938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.497831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.497906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.497923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.497930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.497936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.497950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 00:36:06.231 [2024-06-08 01:01:24.507903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.231 [2024-06-08 01:01:24.507981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.231 [2024-06-08 01:01:24.507998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.231 [2024-06-08 01:01:24.508005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.231 [2024-06-08 01:01:24.508011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.231 [2024-06-08 01:01:24.508024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.231 qpair failed and we were unable to recover it. 
00:36:06.494 [2024-06-08 01:01:24.517931] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.518008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.518024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.518031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.518037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.518051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.527970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.528062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.528087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.528095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.528102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.528120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.537953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.538036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.538061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.538069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.538076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.538094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 
00:36:06.494 [2024-06-08 01:01:24.547983] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.548066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.548091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.548099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.548107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.548125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.558041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.558123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.558148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.558157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.558163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.558182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.568109] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.568239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.568268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.568277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.568283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.568302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 
00:36:06.494 [2024-06-08 01:01:24.578086] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.578167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.578184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.578191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.578198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.578213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.588135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.588215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.588232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.588239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.588245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.588260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.598052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.598144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.598161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.598168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.598174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.598189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 
00:36:06.494 [2024-06-08 01:01:24.608077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.608239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.608255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.608262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.608268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.608282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.618216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.618291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.618308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.618315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.618321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.618336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.628222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.628296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.628312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.628319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.628325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.628339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 
00:36:06.494 [2024-06-08 01:01:24.638216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.494 [2024-06-08 01:01:24.638292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.494 [2024-06-08 01:01:24.638309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.494 [2024-06-08 01:01:24.638316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.494 [2024-06-08 01:01:24.638322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.494 [2024-06-08 01:01:24.638336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.494 qpair failed and we were unable to recover it. 00:36:06.494 [2024-06-08 01:01:24.648262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.495 [2024-06-08 01:01:24.648349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.495 [2024-06-08 01:01:24.648366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.495 [2024-06-08 01:01:24.648372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.495 [2024-06-08 01:01:24.648378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.495 [2024-06-08 01:01:24.648392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.495 qpair failed and we were unable to recover it. 00:36:06.495 [2024-06-08 01:01:24.658297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.495 [2024-06-08 01:01:24.658378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.495 [2024-06-08 01:01:24.658398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.495 [2024-06-08 01:01:24.658411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.495 [2024-06-08 01:01:24.658418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.495 [2024-06-08 01:01:24.658432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.495 qpair failed and we were unable to recover it. 
00:36:06.495 [2024-06-08 01:01:24.668375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.495 [2024-06-08 01:01:24.668506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.495 [2024-06-08 01:01:24.668523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.495 [2024-06-08 01:01:24.668530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.495 [2024-06-08 01:01:24.668536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.495 [2024-06-08 01:01:24.668550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.495 qpair failed and we were unable to recover it. 00:36:06.495 [2024-06-08 01:01:24.678366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.495 [2024-06-08 01:01:24.678445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.495 [2024-06-08 01:01:24.678461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.495 [2024-06-08 01:01:24.678468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.495 [2024-06-08 01:01:24.678474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.495 [2024-06-08 01:01:24.678488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.495 qpair failed and we were unable to recover it. 00:36:06.495 [2024-06-08 01:01:24.688383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.495 [2024-06-08 01:01:24.688549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.495 [2024-06-08 01:01:24.688566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.495 [2024-06-08 01:01:24.688573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.495 [2024-06-08 01:01:24.688579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:06.495 [2024-06-08 01:01:24.688593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.495 qpair failed and we were unable to recover it. 
00:36:06.495 [2024-06-08 01:01:24.698427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.495 [2024-06-08 01:01:24.698509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.495 [2024-06-08 01:01:24.698525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.495 [2024-06-08 01:01:24.698532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.495 [2024-06-08 01:01:24.698539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:06.495 [2024-06-08 01:01:24.698556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.495 qpair failed and we were unable to recover it.
00:36:07.306 (the seven-message CONNECT failure sequence above repeated 68 more times between [2024-06-08 01:01:24.708450] and [2024-06-08 01:01:25.380452], identical except for timestamps, always for tqpair=0x107b270 on qpair id 4, each ending "qpair failed and we were unable to recover it.")
00:36:07.306 [2024-06-08 01:01:25.390358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.306 [2024-06-08 01:01:25.390435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.306 [2024-06-08 01:01:25.390451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.306 [2024-06-08 01:01:25.390458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.306 [2024-06-08 01:01:25.390464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.306 [2024-06-08 01:01:25.390477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-06-08 01:01:25.400342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.306 [2024-06-08 01:01:25.400424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.306 [2024-06-08 01:01:25.400441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.306 [2024-06-08 01:01:25.400448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.306 [2024-06-08 01:01:25.400454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.306 [2024-06-08 01:01:25.400468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.306 qpair failed and we were unable to recover it. 00:36:07.306 [2024-06-08 01:01:25.410371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.306 [2024-06-08 01:01:25.410459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.306 [2024-06-08 01:01:25.410475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.306 [2024-06-08 01:01:25.410482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.306 [2024-06-08 01:01:25.410488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.306 [2024-06-08 01:01:25.410502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.306 qpair failed and we were unable to recover it. 
00:36:07.306 [2024-06-08 01:01:25.420465] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.306 [2024-06-08 01:01:25.420542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.306 [2024-06-08 01:01:25.420562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.306 [2024-06-08 01:01:25.420569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.420575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.420589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.430519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.430603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.430621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.430628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.430634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.430648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.440508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.440587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.440603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.440610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.440616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.440630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-06-08 01:01:25.450568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.450655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.450672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.450679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.450685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.450699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.460548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.460625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.460641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.460648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.460654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.460671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.470625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.470709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.470725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.470732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.470738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.470751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-06-08 01:01:25.480636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.480739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.480755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.480762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.480768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.480782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.490558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.490642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.490658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.490665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.490672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.490685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.500695] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.500783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.500799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.500806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.500812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.500825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-06-08 01:01:25.510622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.510721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.510742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.510750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.510755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.510770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.520733] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.520811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.520827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.520834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.520840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.520854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 00:36:07.307 [2024-06-08 01:01:25.530761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.530838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.530854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.530861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.530867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.307 [2024-06-08 01:01:25.530882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.307 qpair failed and we were unable to recover it. 
00:36:07.307 [2024-06-08 01:01:25.540764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.307 [2024-06-08 01:01:25.540866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.307 [2024-06-08 01:01:25.540883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.307 [2024-06-08 01:01:25.540890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.307 [2024-06-08 01:01:25.540896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.308 [2024-06-08 01:01:25.540910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-06-08 01:01:25.550810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.308 [2024-06-08 01:01:25.551013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.308 [2024-06-08 01:01:25.551029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.308 [2024-06-08 01:01:25.551036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.308 [2024-06-08 01:01:25.551042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.308 [2024-06-08 01:01:25.551059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-06-08 01:01:25.560867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.308 [2024-06-08 01:01:25.560947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.308 [2024-06-08 01:01:25.560963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.308 [2024-06-08 01:01:25.560970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.308 [2024-06-08 01:01:25.560976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.308 [2024-06-08 01:01:25.560990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.308 qpair failed and we were unable to recover it. 
00:36:07.308 [2024-06-08 01:01:25.570886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.308 [2024-06-08 01:01:25.570973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.308 [2024-06-08 01:01:25.570990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.308 [2024-06-08 01:01:25.570997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.308 [2024-06-08 01:01:25.571003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.308 [2024-06-08 01:01:25.571017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.308 [2024-06-08 01:01:25.580892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.308 [2024-06-08 01:01:25.580991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.308 [2024-06-08 01:01:25.581008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.308 [2024-06-08 01:01:25.581015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.308 [2024-06-08 01:01:25.581021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.308 [2024-06-08 01:01:25.581036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.308 qpair failed and we were unable to recover it. 00:36:07.569 [2024-06-08 01:01:25.590914] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.569 [2024-06-08 01:01:25.590990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.569 [2024-06-08 01:01:25.591007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.569 [2024-06-08 01:01:25.591014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.569 [2024-06-08 01:01:25.591020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.569 [2024-06-08 01:01:25.591034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.569 qpair failed and we were unable to recover it. 
00:36:07.569 [2024-06-08 01:01:25.600999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.569 [2024-06-08 01:01:25.601124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.569 [2024-06-08 01:01:25.601144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.569 [2024-06-08 01:01:25.601151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.569 [2024-06-08 01:01:25.601157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.569 [2024-06-08 01:01:25.601171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.569 qpair failed and we were unable to recover it. 00:36:07.569 [2024-06-08 01:01:25.610991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.569 [2024-06-08 01:01:25.611082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.569 [2024-06-08 01:01:25.611107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.569 [2024-06-08 01:01:25.611115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.569 [2024-06-08 01:01:25.611122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.569 [2024-06-08 01:01:25.611141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.569 qpair failed and we were unable to recover it. 00:36:07.569 [2024-06-08 01:01:25.620947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.569 [2024-06-08 01:01:25.621036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.569 [2024-06-08 01:01:25.621060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.569 [2024-06-08 01:01:25.621068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.569 [2024-06-08 01:01:25.621075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.569 [2024-06-08 01:01:25.621093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.569 qpair failed and we were unable to recover it. 
00:36:07.569 [2024-06-08 01:01:25.631061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.569 [2024-06-08 01:01:25.631147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.569 [2024-06-08 01:01:25.631171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.569 [2024-06-08 01:01:25.631180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.569 [2024-06-08 01:01:25.631186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.569 [2024-06-08 01:01:25.631204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.569 qpair failed and we were unable to recover it. 00:36:07.569 [2024-06-08 01:01:25.641045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.641125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.641143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.641150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.641156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.641179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.651078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.651161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.651178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.651185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.651191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.651205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 
00:36:07.570 [2024-06-08 01:01:25.661107] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.661185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.661202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.661209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.661215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.661229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.671139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.671218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.671236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.671243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.671249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.671263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.681244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.681321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.681337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.681345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.681352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.681367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 
00:36:07.570 [2024-06-08 01:01:25.691197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.691314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.691334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.691341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.691347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.691361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.701264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.701377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.701394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.701406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.701413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.701427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.711267] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.711342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.711358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.711365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.711371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.711385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 
00:36:07.570 [2024-06-08 01:01:25.721190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.721283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.721299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.721306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.721312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.721326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.731287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.731373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.731389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.731396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.731410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.731424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.741367] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.741469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.741485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.741493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.741498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.741512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 
00:36:07.570 [2024-06-08 01:01:25.751365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.751444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.751460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.751467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.751473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.751488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.761359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.761442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.761458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.761465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.570 [2024-06-08 01:01:25.761471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.570 [2024-06-08 01:01:25.761485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.570 qpair failed and we were unable to recover it. 00:36:07.570 [2024-06-08 01:01:25.771320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.570 [2024-06-08 01:01:25.771400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.570 [2024-06-08 01:01:25.771420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.570 [2024-06-08 01:01:25.771427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.771433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.771447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 
00:36:07.571 [2024-06-08 01:01:25.781446] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.781534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.781551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.781557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.781563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.781577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 00:36:07.571 [2024-06-08 01:01:25.791533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.791615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.791632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.791639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.791645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.791659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 00:36:07.571 [2024-06-08 01:01:25.801510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.801588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.801604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.801611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.801617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.801630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 
00:36:07.571 [2024-06-08 01:01:25.811551] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.811656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.811673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.811679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.811686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.811699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 00:36:07.571 [2024-06-08 01:01:25.821544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.821622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.821638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.821645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.821655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.821669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 00:36:07.571 [2024-06-08 01:01:25.831578] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.831656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.831673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.831679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.831686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.831699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 
00:36:07.571 [2024-06-08 01:01:25.841661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.841743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.841759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.571 [2024-06-08 01:01:25.841766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.571 [2024-06-08 01:01:25.841772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.571 [2024-06-08 01:01:25.841786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.571 qpair failed and we were unable to recover it. 00:36:07.571 [2024-06-08 01:01:25.851683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.571 [2024-06-08 01:01:25.851766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.571 [2024-06-08 01:01:25.851783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.832 [2024-06-08 01:01:25.851790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.832 [2024-06-08 01:01:25.851798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.832 [2024-06-08 01:01:25.851812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.832 qpair failed and we were unable to recover it. 00:36:07.832 [2024-06-08 01:01:25.861689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.832 [2024-06-08 01:01:25.861768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.832 [2024-06-08 01:01:25.861784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.832 [2024-06-08 01:01:25.861792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.832 [2024-06-08 01:01:25.861798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.832 [2024-06-08 01:01:25.861812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.832 qpair failed and we were unable to recover it. 
00:36:07.832 [2024-06-08 01:01:25.871765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.832 [2024-06-08 01:01:25.871898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.832 [2024-06-08 01:01:25.871914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.832 [2024-06-08 01:01:25.871921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.832 [2024-06-08 01:01:25.871927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.832 [2024-06-08 01:01:25.871941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.832 qpair failed and we were unable to recover it. 00:36:07.832 [2024-06-08 01:01:25.881753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.833 [2024-06-08 01:01:25.881860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.833 [2024-06-08 01:01:25.881877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.833 [2024-06-08 01:01:25.881884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.833 [2024-06-08 01:01:25.881890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.833 [2024-06-08 01:01:25.881905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.833 qpair failed and we were unable to recover it. 00:36:07.833 [2024-06-08 01:01:25.891811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.833 [2024-06-08 01:01:25.891925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.833 [2024-06-08 01:01:25.891942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.833 [2024-06-08 01:01:25.891949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.833 [2024-06-08 01:01:25.891955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:07.833 [2024-06-08 01:01:25.891969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.833 qpair failed and we were unable to recover it. 
00:36:07.833 [2024-06-08 01:01:25.901804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.901917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.901933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.901940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.901946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.901960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.911820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.911899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.911915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.911926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.911933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.911947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.921867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.921950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.921965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.921972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.921978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.921992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.931951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.932034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.932050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.932057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.932063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.932077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.941901] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.942010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.942027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.942035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.942040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.942055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.951923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.952012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.952036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.952046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.952052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.952072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.961962] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.962120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.962138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.962145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.962151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.962166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.971986] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.972070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.972086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.972093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.972099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.972113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.982049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.982125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.982141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.982148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.982154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.982168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:25.992052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:25.992127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:25.992144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:25.992151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:25.992157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:25.992171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.002078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.002154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.002170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.002181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.002187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.002202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.012092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.012174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.012190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.012197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.012203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.012217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.022105] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.022197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.022222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.022231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.022238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.022256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.032147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.032233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.032257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.032266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.032273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.032291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.042133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.042233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.042251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.042258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.042264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.042279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.052253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.052339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.052356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.833 [2024-06-08 01:01:26.052363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.833 [2024-06-08 01:01:26.052369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.833 [2024-06-08 01:01:26.052384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.833 qpair failed and we were unable to recover it.
00:36:07.833 [2024-06-08 01:01:26.062284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.833 [2024-06-08 01:01:26.062414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.833 [2024-06-08 01:01:26.062432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.062439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.062445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.062459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:07.834 [2024-06-08 01:01:26.072324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.834 [2024-06-08 01:01:26.072449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.834 [2024-06-08 01:01:26.072465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.072473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.072478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.072493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:07.834 [2024-06-08 01:01:26.082278] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.834 [2024-06-08 01:01:26.082359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.834 [2024-06-08 01:01:26.082375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.082382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.082388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.082409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:07.834 [2024-06-08 01:01:26.092342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.834 [2024-06-08 01:01:26.092428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.834 [2024-06-08 01:01:26.092445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.092456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.092462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.092476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:07.834 [2024-06-08 01:01:26.102268] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.834 [2024-06-08 01:01:26.102342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.834 [2024-06-08 01:01:26.102358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.102365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.102371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.102384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:07.834 [2024-06-08 01:01:26.112453] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.834 [2024-06-08 01:01:26.112574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.834 [2024-06-08 01:01:26.112591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.834 [2024-06-08 01:01:26.112598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.834 [2024-06-08 01:01:26.112604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:07.834 [2024-06-08 01:01:26.112618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:07.834 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.122424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.122502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.122519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.122526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.122533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.122547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.132429] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.132512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.132528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.132535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.132541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.132555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.142469] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.142554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.142570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.142577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.142583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.142597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.152512] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.152595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.152611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.152618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.152624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.152638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.162542] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.162621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.162637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.162644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.162650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.162664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.172472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.172551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.172568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.172574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.172581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.172594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.182612] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.182691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.182711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.182718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.182724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.182738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.192705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.192786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.192802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.192808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.192815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.192828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.202629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.202707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.202723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.202730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.202736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.202750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.212664] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.212746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.212762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.212769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.212775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.212788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.222647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.222718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.222734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.222741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.222747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.222765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.232736] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.232813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.232830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.232837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.096 [2024-06-08 01:01:26.232843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.096 [2024-06-08 01:01:26.232856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.096 qpair failed and we were unable to recover it.
00:36:08.096 [2024-06-08 01:01:26.242802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.096 [2024-06-08 01:01:26.242878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.096 [2024-06-08 01:01:26.242894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.096 [2024-06-08 01:01:26.242901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.242907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.242921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.252797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.252901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.252917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.252924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.252931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.252944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.262745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.262822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.262838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.262845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.262851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.262865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.272805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.272894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.272914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.272921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.272927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.272940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.282843] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.282920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.282935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.282942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.282948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.282962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.292835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.292913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.292929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.292936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.292942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.292956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.302855] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.302934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.302951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.302958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.302964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.302979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.312916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.312994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.313011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.313018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.313024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.313041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.323033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.323120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.323137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.323144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.323150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.323163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.332967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.333049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.333074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.333083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.333090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.333108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.343052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.343147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.343172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.343181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.343187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.343206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.353105] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.353241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.353265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.353274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.353280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.353299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.362960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.363040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.363063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.363070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.363077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.363092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.097 [2024-06-08 01:01:26.372951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.097 [2024-06-08 01:01:26.373035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.097 [2024-06-08 01:01:26.373052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.097 [2024-06-08 01:01:26.373060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.097 [2024-06-08 01:01:26.373067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.097 [2024-06-08 01:01:26.373081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.097 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.383002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.383078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.383094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.383101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.383107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.383122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.393143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.393242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.393258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.393265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.393272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.393286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.403257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.403366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.403383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.403390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.403397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.403424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.413203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.413321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.413338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.413344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.413351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.413365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.423198] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.423317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.423333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.423341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.423346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.423360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.433327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.433410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.433428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.433435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.433441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.433456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.443274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.443349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.443365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.443372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.443378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.443392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.453275] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.453351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.453372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.453379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.453385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.453399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.463291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.359 [2024-06-08 01:01:26.463366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.359 [2024-06-08 01:01:26.463383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.359 [2024-06-08 01:01:26.463390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.359 [2024-06-08 01:01:26.463396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:08.359 [2024-06-08 01:01:26.463415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:08.359 qpair failed and we were unable to recover it.
00:36:08.359 [2024-06-08 01:01:26.473362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.473441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.473458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.473465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.359 [2024-06-08 01:01:26.473471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.359 [2024-06-08 01:01:26.473485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.359 qpair failed and we were unable to recover it. 00:36:08.359 [2024-06-08 01:01:26.483318] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.483391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.483413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.483420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.359 [2024-06-08 01:01:26.483426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.359 [2024-06-08 01:01:26.483440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.359 qpair failed and we were unable to recover it. 00:36:08.359 [2024-06-08 01:01:26.493411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.493485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.493502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.493509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.359 [2024-06-08 01:01:26.493519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.359 [2024-06-08 01:01:26.493533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.359 qpair failed and we were unable to recover it. 
00:36:08.359 [2024-06-08 01:01:26.503396] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.503474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.503490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.503497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.359 [2024-06-08 01:01:26.503503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.359 [2024-06-08 01:01:26.503517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.359 qpair failed and we were unable to recover it. 00:36:08.359 [2024-06-08 01:01:26.513417] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.513533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.513550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.513557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.359 [2024-06-08 01:01:26.513562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.359 [2024-06-08 01:01:26.513576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.359 qpair failed and we were unable to recover it. 00:36:08.359 [2024-06-08 01:01:26.523504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.359 [2024-06-08 01:01:26.523584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.359 [2024-06-08 01:01:26.523600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.359 [2024-06-08 01:01:26.523606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.523612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.523626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 
00:36:08.360 [2024-06-08 01:01:26.533499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.533572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.533588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.533596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.533601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.533615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.543505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.543616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.543632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.543639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.543645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.543659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.553534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.553608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.553624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.553631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.553637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.553651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 
00:36:08.360 [2024-06-08 01:01:26.563605] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.563685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.563701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.563708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.563714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.563728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.573586] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.573686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.573703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.573710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.573716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.573734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.583604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.583684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.583701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.583709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.583718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.583732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 
00:36:08.360 [2024-06-08 01:01:26.593688] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.593762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.593779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.593786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.593793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.593807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.603609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.603687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.603704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.603710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.603717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.603731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.613700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.613779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.613796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.613803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.613809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.613822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 
00:36:08.360 [2024-06-08 01:01:26.623721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.623791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.623808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.623814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.623820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.623835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.360 [2024-06-08 01:01:26.633730] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.360 [2024-06-08 01:01:26.633818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.360 [2024-06-08 01:01:26.633835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.360 [2024-06-08 01:01:26.633842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.360 [2024-06-08 01:01:26.633848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.360 [2024-06-08 01:01:26.633861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.360 qpair failed and we were unable to recover it. 00:36:08.621 [2024-06-08 01:01:26.643809] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.643885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.643902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.643909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.643915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.643928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 
00:36:08.621 [2024-06-08 01:01:26.653783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.653856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.653873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.653880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.653886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.653900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 00:36:08.621 [2024-06-08 01:01:26.663825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.663935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.663951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.663958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.663964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.663979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 00:36:08.621 [2024-06-08 01:01:26.673878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.673952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.673968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.673984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.673990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.674003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 
00:36:08.621 [2024-06-08 01:01:26.683908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.683979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.683995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.684002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.684008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.684022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 00:36:08.621 [2024-06-08 01:01:26.693903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.693979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.693996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.694003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.694009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.694022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 00:36:08.621 [2024-06-08 01:01:26.703902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.621 [2024-06-08 01:01:26.703977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.621 [2024-06-08 01:01:26.703994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.621 [2024-06-08 01:01:26.704001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.621 [2024-06-08 01:01:26.704007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.621 [2024-06-08 01:01:26.704021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.621 qpair failed and we were unable to recover it. 
00:36:08.622 [2024-06-08 01:01:26.713951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.714030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.714055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.714063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.714070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.714089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.724052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.724181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.724205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.724214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.724220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.724239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.734017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.734101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.734125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.734134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.734141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.734159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 
00:36:08.622 [2024-06-08 01:01:26.744065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.744191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.744215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.744224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.744230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.744249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.754052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.754140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.754165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.754174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.754180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.754199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.764135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.764221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.764245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.764259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.764266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.764284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 
00:36:08.622 [2024-06-08 01:01:26.774115] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.774193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.774212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.774219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.774225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.774240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.784127] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.784200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.784217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.784224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.784230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.784244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.794162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.794232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.794249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.794256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.794262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.794276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 
00:36:08.622 [2024-06-08 01:01:26.804355] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.804435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.804452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.804459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.804465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.804479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.814211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.814304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.814321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.814328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.814334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.814347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 00:36:08.622 [2024-06-08 01:01:26.824239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.824313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.824329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.824336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.824342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.824356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.622 qpair failed and we were unable to recover it. 
00:36:08.622 [2024-06-08 01:01:26.834299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.622 [2024-06-08 01:01:26.834371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.622 [2024-06-08 01:01:26.834387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.622 [2024-06-08 01:01:26.834394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.622 [2024-06-08 01:01:26.834400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.622 [2024-06-08 01:01:26.834420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 00:36:08.623 [2024-06-08 01:01:26.844424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.844523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.844539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.844547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.844553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.844568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 00:36:08.623 [2024-06-08 01:01:26.854436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.854541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.854557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.854567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.854573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.854588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 
00:36:08.623 [2024-06-08 01:01:26.864350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.864422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.864438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.864445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.864451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.864467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 00:36:08.623 [2024-06-08 01:01:26.874383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.874453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.874469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.874476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.874482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.874495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 00:36:08.623 [2024-06-08 01:01:26.884436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.884516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.884532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.884539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.884545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.884559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 
00:36:08.623 [2024-06-08 01:01:26.894351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.623 [2024-06-08 01:01:26.894435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.623 [2024-06-08 01:01:26.894452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.623 [2024-06-08 01:01:26.894459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.623 [2024-06-08 01:01:26.894464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.623 [2024-06-08 01:01:26.894478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.623 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.904453] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.904527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.904543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.904550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.904556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.904571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.914585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.914690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.914706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.914713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.914719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.914733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 
00:36:08.885 [2024-06-08 01:01:26.924556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.924634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.924650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.924657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.924663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.924677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.934534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.934649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.934665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.934672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.934679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.934692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.944555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.944623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.944643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.944650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.944656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.944670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 
00:36:08.885 [2024-06-08 01:01:26.954630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.954712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.954728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.954735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.954741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.954756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.964630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.964702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.964719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.964726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.964732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.964745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 00:36:08.885 [2024-06-08 01:01:26.974644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.974724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.974740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.974747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.974753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.885 [2024-06-08 01:01:26.974766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.885 qpair failed and we were unable to recover it. 
00:36:08.885 [2024-06-08 01:01:26.984711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.885 [2024-06-08 01:01:26.984790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.885 [2024-06-08 01:01:26.984806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.885 [2024-06-08 01:01:26.984813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.885 [2024-06-08 01:01:26.984819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:26.984833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:26.994689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:26.994759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:26.994775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:26.994782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:26.994788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:26.994801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.004785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.004864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.004880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.004887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.004893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.004907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 
00:36:08.886 [2024-06-08 01:01:27.014779] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.014856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.014872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.014878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.014884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.014898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.024788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.024858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.024875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.024882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.024888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.024901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.034691] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.034762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.034781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.034788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.034794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.034808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 
00:36:08.886 [2024-06-08 01:01:27.044768] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.044848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.044864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.044871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.044877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.044891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.054888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.054986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.055002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.055010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.055015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.055029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.064805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.064876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.064892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.064899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.064905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.064918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 
00:36:08.886 [2024-06-08 01:01:27.074941] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.075012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.075028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.075034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.075040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.075057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.084980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.085056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.085073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.085080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.085086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.085100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.094964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.095045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.095070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.095079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.095085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.095104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 
00:36:08.886 [2024-06-08 01:01:27.104980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.105054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.105078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.886 [2024-06-08 01:01:27.105087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.886 [2024-06-08 01:01:27.105094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.886 [2024-06-08 01:01:27.105113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.886 qpair failed and we were unable to recover it. 00:36:08.886 [2024-06-08 01:01:27.115055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.886 [2024-06-08 01:01:27.115132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.886 [2024-06-08 01:01:27.115157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.115166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.115172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.115191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 00:36:08.887 [2024-06-08 01:01:27.125095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.887 [2024-06-08 01:01:27.125181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.887 [2024-06-08 01:01:27.125203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.125211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.125217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.125232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 
00:36:08.887 [2024-06-08 01:01:27.135084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.887 [2024-06-08 01:01:27.135164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.887 [2024-06-08 01:01:27.135180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.135188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.135194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.135208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 00:36:08.887 [2024-06-08 01:01:27.145084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.887 [2024-06-08 01:01:27.145162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.887 [2024-06-08 01:01:27.145187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.145196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.145202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.145221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 00:36:08.887 [2024-06-08 01:01:27.155168] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.887 [2024-06-08 01:01:27.155243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.887 [2024-06-08 01:01:27.155261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.155269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.155275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.155290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 
00:36:08.887 [2024-06-08 01:01:27.165184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.887 [2024-06-08 01:01:27.165262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.887 [2024-06-08 01:01:27.165279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.887 [2024-06-08 01:01:27.165286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.887 [2024-06-08 01:01:27.165293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:08.887 [2024-06-08 01:01:27.165315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:08.887 qpair failed and we were unable to recover it. 00:36:09.148 [2024-06-08 01:01:27.175162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.148 [2024-06-08 01:01:27.175237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.148 [2024-06-08 01:01:27.175254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.148 [2024-06-08 01:01:27.175261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.148 [2024-06-08 01:01:27.175267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.148 [2024-06-08 01:01:27.175281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.148 qpair failed and we were unable to recover it. 00:36:09.148 [2024-06-08 01:01:27.185239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.148 [2024-06-08 01:01:27.185309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.148 [2024-06-08 01:01:27.185326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.148 [2024-06-08 01:01:27.185333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.148 [2024-06-08 01:01:27.185339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.148 [2024-06-08 01:01:27.185353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.148 qpair failed and we were unable to recover it. 
00:36:09.148 [2024-06-08 01:01:27.195250] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.148 [2024-06-08 01:01:27.195326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.195342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.195350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.195356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.195370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.205318] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.205394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.205414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.205422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.205428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.205442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.215283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.215351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.215371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.215378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.215384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.215398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 
00:36:09.149 [2024-06-08 01:01:27.225322] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.225393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.225414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.225421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.225428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.225442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.235344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.235449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.235466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.235474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.235480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.235494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.245458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.245538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.245559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.245566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.245573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.245589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 
00:36:09.149 [2024-06-08 01:01:27.255379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.255450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.255468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.255475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.255486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.255501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.265415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.265492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.265508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.265515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.265522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.265536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.275459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.275533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.275550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.275557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.275563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.275577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 
00:36:09.149 [2024-06-08 01:01:27.285528] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.285606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.285621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.285629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.285636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.285650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.295518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.295598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.295614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.295622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.295629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.295644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.305421] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.305495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.305511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.305518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.305524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.305539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 
00:36:09.149 [2024-06-08 01:01:27.315611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.315698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.315715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.149 [2024-06-08 01:01:27.315722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.149 [2024-06-08 01:01:27.315728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.149 [2024-06-08 01:01:27.315742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.149 qpair failed and we were unable to recover it. 00:36:09.149 [2024-06-08 01:01:27.325656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.149 [2024-06-08 01:01:27.325739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.149 [2024-06-08 01:01:27.325759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.325767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.325774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.325789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.335644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.335725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.335742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.335749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.335756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.335770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 
00:36:09.150 [2024-06-08 01:01:27.345549] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.345660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.345677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.345684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.345695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.345709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.355685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.355804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.355824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.355831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.355837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.355852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.365796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.365872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.365889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.365896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.365902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.365917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 
00:36:09.150 [2024-06-08 01:01:27.375773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.375902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.375918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.375925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.375932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.375946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.385760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.385838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.385855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.385862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.385869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.385883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.395763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.395837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.395854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.395862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.395868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.395883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 
00:36:09.150 [2024-06-08 01:01:27.405834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.405928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.405945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.405952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.405959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.405973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.415825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.415898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.415914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.415921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.415927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.415942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 00:36:09.150 [2024-06-08 01:01:27.425846] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.150 [2024-06-08 01:01:27.425921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.150 [2024-06-08 01:01:27.425937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.150 [2024-06-08 01:01:27.425944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.150 [2024-06-08 01:01:27.425951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.150 [2024-06-08 01:01:27.425966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.150 qpair failed and we were unable to recover it. 
00:36:09.412 [2024-06-08 01:01:27.435878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.435954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.435971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.435979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.435989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.436004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.445997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.446072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.446088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.446095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.446102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.446116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.455949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.456029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.456045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.456052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.456059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.456073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 
00:36:09.412 [2024-06-08 01:01:27.465938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.466009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.466025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.466032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.466039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.466054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.475996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.476070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.476086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.476093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.476100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.476114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.486057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.486142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.486168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.486177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.486184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.486202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 
00:36:09.412 [2024-06-08 01:01:27.496058] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.496145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.496170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.496179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.496187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.496206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.506106] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.506181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.506199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.506207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.506213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.506229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.516097] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.516173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.516190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.516197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.516204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.516219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 
00:36:09.412 [2024-06-08 01:01:27.526122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.526206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.526222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.526234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.526241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.526255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.536163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.536241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.536257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.536265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.536271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.536285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 00:36:09.412 [2024-06-08 01:01:27.546174] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.546245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.546261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.412 [2024-06-08 01:01:27.546268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.412 [2024-06-08 01:01:27.546275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.412 [2024-06-08 01:01:27.546289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.412 qpair failed and we were unable to recover it. 
00:36:09.412 [2024-06-08 01:01:27.556215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.412 [2024-06-08 01:01:27.556285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.412 [2024-06-08 01:01:27.556302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.413 [2024-06-08 01:01:27.556309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.413 [2024-06-08 01:01:27.556316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.413 [2024-06-08 01:01:27.556330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.413 qpair failed and we were unable to recover it. 00:36:09.413 [2024-06-08 01:01:27.566270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.413 [2024-06-08 01:01:27.566351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.413 [2024-06-08 01:01:27.566368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.413 [2024-06-08 01:01:27.566375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.413 [2024-06-08 01:01:27.566382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.413 [2024-06-08 01:01:27.566396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.413 qpair failed and we were unable to recover it. 00:36:09.413 [2024-06-08 01:01:27.576264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.413 [2024-06-08 01:01:27.576343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.413 [2024-06-08 01:01:27.576360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.413 [2024-06-08 01:01:27.576367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.413 [2024-06-08 01:01:27.576374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:09.413 [2024-06-08 01:01:27.576388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.413 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-06-08 01:01:28.248176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.248257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.248275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.248282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.248289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.248304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.258144] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.258225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.258242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.258251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.258258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.258273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.268141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.268233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.268250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.268257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.268263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.268278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-06-08 01:01:28.278173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.278253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.278270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.278278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.278285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.278299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.288244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.288323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.288339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.288350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.288357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.288371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.298223] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.298342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.298359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.298366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.298373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.298387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-06-08 01:01:28.308259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.308331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.308347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.308354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.308361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.308375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.318288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.318361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.318377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.318384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.318390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.318410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.328367] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.328450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.328467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.328475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.328481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.328496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 
00:36:10.204 [2024-06-08 01:01:28.338343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.338430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.338447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.338455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.338462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.338477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.348372] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.348439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.348456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.348463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.348469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.204 [2024-06-08 01:01:28.348484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.204 qpair failed and we were unable to recover it. 00:36:10.204 [2024-06-08 01:01:28.358433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.204 [2024-06-08 01:01:28.358508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.204 [2024-06-08 01:01:28.358525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.204 [2024-06-08 01:01:28.358532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.204 [2024-06-08 01:01:28.358538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.358553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-06-08 01:01:28.368433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.368528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.368544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.368551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.368557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.368572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.378485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.378577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.378593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.378604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.378610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.378624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.388512] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.388586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.388602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.388609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.388616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.388630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-06-08 01:01:28.398535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.398606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.398622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.398629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.398635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.398650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.408490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.408571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.408587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.408594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.408600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.408614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.418479] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.418556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.418573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.418581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.418587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.418602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-06-08 01:01:28.428640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.428727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.428746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.428753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.428760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.428776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.438526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.438597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.438614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.438621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.438628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.438642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.448700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.448777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.448793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.448800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.448807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.448822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.205 [2024-06-08 01:01:28.458717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.458799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.458816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.458825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.458831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.458845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.468734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.468811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.468827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.468839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.468845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.468860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 00:36:10.205 [2024-06-08 01:01:28.478631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.205 [2024-06-08 01:01:28.478694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.205 [2024-06-08 01:01:28.478711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.205 [2024-06-08 01:01:28.478718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.205 [2024-06-08 01:01:28.478724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.205 [2024-06-08 01:01:28.478738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.205 qpair failed and we were unable to recover it. 
00:36:10.468 [2024-06-08 01:01:28.488701] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.488814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.488831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.488839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.488845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.488860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.498806] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.498881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.498898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.498905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.498912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.498927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.508807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.508878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.508895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.508902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.508909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.508923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 
00:36:10.468 [2024-06-08 01:01:28.518845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.518919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.518936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.518943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.518950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.518964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.528864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.528946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.528962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.528970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.528976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.528991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.538920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.539004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.539021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.539028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.539034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.539049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 
00:36:10.468 [2024-06-08 01:01:28.548932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.549046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.549063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.549070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.549076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.549091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.558959] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.559037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.559057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.559065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.559071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.559086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 00:36:10.468 [2024-06-08 01:01:28.569053] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.569129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.468 [2024-06-08 01:01:28.569146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.468 [2024-06-08 01:01:28.569154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.468 [2024-06-08 01:01:28.569160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.468 [2024-06-08 01:01:28.569175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.468 qpair failed and we were unable to recover it. 
00:36:10.468 [2024-06-08 01:01:28.579010] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.468 [2024-06-08 01:01:28.579095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.579120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.579129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.579136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.579155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.589084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.589193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.589218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.589227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.589234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.589253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.599099] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.599181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.599207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.599216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.599223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.599246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 
00:36:10.469 [2024-06-08 01:01:28.609181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.609267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.609293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.609302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.609308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.609327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.619131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.619211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.619228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.619236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.619243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.619258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.629164] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.629234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.629250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.629258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.629265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.629280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 
00:36:10.469 [2024-06-08 01:01:28.639156] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.639229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.639245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.639253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.639260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.639274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.649237] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.649317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.649338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.649346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.649352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.649366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.659284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.659420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.659438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.659446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.659453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.659467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 
00:36:10.469 [2024-06-08 01:01:28.669239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.669307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.669324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.669331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.669337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.669353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.679278] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.679354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.679371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.679378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.679385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.679400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.689340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.689427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.689443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.689451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.689457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.689478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 
00:36:10.469 [2024-06-08 01:01:28.699344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.699464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.699481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.469 [2024-06-08 01:01:28.699489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.469 [2024-06-08 01:01:28.699495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.469 [2024-06-08 01:01:28.699509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.469 qpair failed and we were unable to recover it. 00:36:10.469 [2024-06-08 01:01:28.709348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.469 [2024-06-08 01:01:28.709427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.469 [2024-06-08 01:01:28.709443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.470 [2024-06-08 01:01:28.709451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.470 [2024-06-08 01:01:28.709458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.470 [2024-06-08 01:01:28.709472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.470 qpair failed and we were unable to recover it. 00:36:10.470 [2024-06-08 01:01:28.719282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.470 [2024-06-08 01:01:28.719380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.470 [2024-06-08 01:01:28.719399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.470 [2024-06-08 01:01:28.719413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.470 [2024-06-08 01:01:28.719424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.470 [2024-06-08 01:01:28.719441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.470 qpair failed and we were unable to recover it. 
00:36:10.470 [2024-06-08 01:01:28.729504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.470 [2024-06-08 01:01:28.729584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.470 [2024-06-08 01:01:28.729601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.470 [2024-06-08 01:01:28.729608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.470 [2024-06-08 01:01:28.729615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.470 [2024-06-08 01:01:28.729630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.470 qpair failed and we were unable to recover it. 00:36:10.470 [2024-06-08 01:01:28.739425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.470 [2024-06-08 01:01:28.739509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.470 [2024-06-08 01:01:28.739533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.470 [2024-06-08 01:01:28.739541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.470 [2024-06-08 01:01:28.739547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.470 [2024-06-08 01:01:28.739563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.470 qpair failed and we were unable to recover it. 00:36:10.470 [2024-06-08 01:01:28.749442] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.470 [2024-06-08 01:01:28.749515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.470 [2024-06-08 01:01:28.749531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.470 [2024-06-08 01:01:28.749539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.470 [2024-06-08 01:01:28.749545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.470 [2024-06-08 01:01:28.749561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.470 qpair failed and we were unable to recover it. 
00:36:10.731 [2024-06-08 01:01:28.759540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.731 [2024-06-08 01:01:28.759619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.731 [2024-06-08 01:01:28.759635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.731 [2024-06-08 01:01:28.759644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.731 [2024-06-08 01:01:28.759651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.731 [2024-06-08 01:01:28.759666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-06-08 01:01:28.769556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.731 [2024-06-08 01:01:28.769636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.731 [2024-06-08 01:01:28.769653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.731 [2024-06-08 01:01:28.769660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.731 [2024-06-08 01:01:28.769667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.731 [2024-06-08 01:01:28.769681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.731 qpair failed and we were unable to recover it. 00:36:10.731 [2024-06-08 01:01:28.779434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.731 [2024-06-08 01:01:28.779510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.731 [2024-06-08 01:01:28.779527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.779534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.779542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.779560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-06-08 01:01:28.789562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.789658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.789676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.789684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.789690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.789710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.799606] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.799680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.799697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.799705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.799712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.799726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.809663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.809738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.809754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.809761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.809768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.809782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-06-08 01:01:28.819639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.819717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.819734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.819741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.819749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.819763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.829662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.829738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.829763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.829772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.829778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.829793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.839717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.839788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.839804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.839812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.839819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.839833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-06-08 01:01:28.849760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.849839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.849855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.849862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.849869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.849883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.859754] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.859831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.859848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.859855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.859862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.859877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.869757] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.869841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.869857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.869865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.869874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.869889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-06-08 01:01:28.879814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.879882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.879898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.879906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.879913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.879927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.889876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.889967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.889984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.889991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.889997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.890012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.732 [2024-06-08 01:01:28.899878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.899959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.899975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.899982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.899989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.900003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 
00:36:10.732 [2024-06-08 01:01:28.909904] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.732 [2024-06-08 01:01:28.909976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.732 [2024-06-08 01:01:28.909992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.732 [2024-06-08 01:01:28.910000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.732 [2024-06-08 01:01:28.910007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.732 [2024-06-08 01:01:28.910021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.732 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.919908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.919986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.920002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.920009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.920016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.920030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.930020] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.930125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.930142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.930149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.930156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.930170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 
00:36:10.733 [2024-06-08 01:01:28.939995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.940085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.940110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.940119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.940126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.940145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.949992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.950069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.950093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.950103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.950109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.950128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.960065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.960148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.960173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.960182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.960193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.960213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 
00:36:10.733 [2024-06-08 01:01:28.970078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.970164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.970189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.970198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.970205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.970223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.980115] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.980218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.980236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.980244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.980251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.980266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:28.990103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:28.990177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:28.990193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:28.990200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:28.990207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:28.990222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 
00:36:10.733 [2024-06-08 01:01:29.000162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:29.000238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:29.000254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:29.000262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:29.000269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:29.000283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.733 [2024-06-08 01:01:29.010188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.733 [2024-06-08 01:01:29.010269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.733 [2024-06-08 01:01:29.010286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.733 [2024-06-08 01:01:29.010294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.733 [2024-06-08 01:01:29.010300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.733 [2024-06-08 01:01:29.010315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.733 qpair failed and we were unable to recover it. 00:36:10.995 [2024-06-08 01:01:29.020179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.020258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.020274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.020281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.020288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.020303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.995 qpair failed and we were unable to recover it. 
00:36:10.995 [2024-06-08 01:01:29.030198] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.030273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.030290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.030297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.030304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.030318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-06-08 01:01:29.040199] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.040273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.040289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.040297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.040304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.040318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-06-08 01:01:29.050301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.050379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.050395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.050414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.050422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.050437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.995 qpair failed and we were unable to recover it. 
00:36:10.995 [2024-06-08 01:01:29.060285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.060363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.060380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.060387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.060394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.060412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-06-08 01:01:29.070334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.995 [2024-06-08 01:01:29.070414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.995 [2024-06-08 01:01:29.070430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.995 [2024-06-08 01:01:29.070439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.995 [2024-06-08 01:01:29.070445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.995 [2024-06-08 01:01:29.070460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.080246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.080321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.080337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.080345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.080352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.080366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-06-08 01:01:29.090392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.090475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.090492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.090500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.090507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.090521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.100418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.100499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.100515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.100523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.100530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.100544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.110425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.110499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.110515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.110522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.110529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.110543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 
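Each repetition in this stretch is a fresh connect attempt failing the same way (the recurring tqpair=0x107b270 most likely just means the freed connection object is being reallocated at the same address). When triaging a run like this, it helps to confirm that every CONNECT failure really carries the same status rather than a mix; a throwaway check along these lines works, where build.log is an assumed file name for a saved copy of this console output:

```bash
# How many CONNECT completions failed with the "Invalid Parameters" status...
grep -c 'sct 1, sc 130' build.log

# ...and whether any CONNECT failed with a different status (0 expected here).
grep 'Connect command completed with error' build.log | grep -vc 'sct 1, sc 130'
```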
00:36:10.996 [2024-06-08 01:01:29.120488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.120563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.120579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.120586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.120593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.120608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.130536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.130614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.130630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.130638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.130645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.130659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.140533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.140659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.140676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.140686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.140693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.140708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-06-08 01:01:29.150550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.150614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.150631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.150639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.150645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.150660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.160574] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.160656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.160672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.160680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.160687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.160701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.170679] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.170760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.170776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.170783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.170791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.170805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-06-08 01:01:29.180635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.180712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.180728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.180736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.180743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.180757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.190648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.190722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.190739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.190747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.190753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.190767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-06-08 01:01:29.200708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.200780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.200797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.996 [2024-06-08 01:01:29.200804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.996 [2024-06-08 01:01:29.200811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.996 [2024-06-08 01:01:29.200825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-06-08 01:01:29.210742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.996 [2024-06-08 01:01:29.210814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.996 [2024-06-08 01:01:29.210831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.210838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.210845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.210859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-06-08 01:01:29.220724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.220795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.220811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.220818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.220824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.220839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-06-08 01:01:29.230755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.230826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.230842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.230853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.230860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.230874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 
00:36:10.997 [2024-06-08 01:01:29.240748] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.240816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.240833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.240841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.240848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.240861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-06-08 01:01:29.250813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.250925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.250941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.250948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.250955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.250969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-06-08 01:01:29.260838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.260915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.260932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.260939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.260946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.260960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 
00:36:10.997 [2024-06-08 01:01:29.270874] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.997 [2024-06-08 01:01:29.270946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.997 [2024-06-08 01:01:29.270962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.997 [2024-06-08 01:01:29.270969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.997 [2024-06-08 01:01:29.270976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:10.997 [2024-06-08 01:01:29.270990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.997 qpair failed and we were unable to recover it. 00:36:11.259 [2024-06-08 01:01:29.280894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.259 [2024-06-08 01:01:29.280966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.259 [2024-06-08 01:01:29.280982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.259 [2024-06-08 01:01:29.280990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.259 [2024-06-08 01:01:29.280996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:11.259 [2024-06-08 01:01:29.281011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.259 qpair failed and we were unable to recover it. 00:36:11.259 [2024-06-08 01:01:29.290993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.259 [2024-06-08 01:01:29.291146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.259 [2024-06-08 01:01:29.291163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.259 [2024-06-08 01:01:29.291170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.259 [2024-06-08 01:01:29.291176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:11.259 [2024-06-08 01:01:29.291190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.259 qpair failed and we were unable to recover it. 
00:36:11.259 [2024-06-08 01:01:29.300949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.259 [2024-06-08 01:01:29.301032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.259 [2024-06-08 01:01:29.301057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.259 [2024-06-08 01:01:29.301065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.259 [2024-06-08 01:01:29.301072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:11.259 [2024-06-08 01:01:29.301090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.259 qpair failed and we were unable to recover it. 00:36:11.259 [2024-06-08 01:01:29.310974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.259 [2024-06-08 01:01:29.311046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.259 [2024-06-08 01:01:29.311071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.259 [2024-06-08 01:01:29.311080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.259 [2024-06-08 01:01:29.311087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:11.259 [2024-06-08 01:01:29.311106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.259 qpair failed and we were unable to recover it. 00:36:11.259 [2024-06-08 01:01:29.321021] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.259 [2024-06-08 01:01:29.321095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.259 [2024-06-08 01:01:29.321117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.259 [2024-06-08 01:01:29.321125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.259 [2024-06-08 01:01:29.321133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270 00:36:11.259 [2024-06-08 01:01:29.321148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.259 qpair failed and we were unable to recover it. 
00:36:11.259 [2024-06-08 01:01:29.331029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.331107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.331132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.331141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.331147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.331166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.340966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.341043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.341061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.341068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.341075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.341089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.351081] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.351161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.351186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.351196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.351202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.351221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.361126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.361207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.361231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.361240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.361247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.361266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.371148] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.371241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.371259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.371267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.371273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.371289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.381194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.381268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.381285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.381293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.381299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.381314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.391190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.391263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.391280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.391287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.391294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.391308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 [2024-06-08 01:01:29.401234] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.259 [2024-06-08 01:01:29.401306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.259 [2024-06-08 01:01:29.401322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.259 [2024-06-08 01:01:29.401330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.259 [2024-06-08 01:01:29.401336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x107b270
00:36:11.259 [2024-06-08 01:01:29.401350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.259 qpair failed and we were unable to recover it.
00:36:11.259 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 [2024-06-08 01:01:29.402174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:11.260 [2024-06-08 01:01:29.411387] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.260 [2024-06-08 01:01:29.411607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.260 [2024-06-08 01:01:29.411674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.260 [2024-06-08 01:01:29.411701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.260 [2024-06-08 01:01:29.411720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf3c000b90
00:36:11.260 [2024-06-08 01:01:29.411773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:11.260 qpair failed and we were unable to recover it.
00:36:11.260 [2024-06-08 01:01:29.421408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.260 [2024-06-08 01:01:29.421565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.260 [2024-06-08 01:01:29.421599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.260 [2024-06-08 01:01:29.421615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.260 [2024-06-08 01:01:29.421628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbf3c000b90
00:36:11.260 [2024-06-08 01:01:29.421659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:11.260 qpair failed and we were unable to recover it.
00:36:11.260 [2024-06-08 01:01:29.421825] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:36:11.260 A controller has encountered a failure and is being reset.
00:36:11.260 [2024-06-08 01:01:29.421926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1088e30 (9): Bad file descriptor
00:36:11.260 Controller properly reset.
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Read completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 Write completed with error (sct=0, sc=8)
00:36:11.260 starting I/O failed
00:36:11.260 [2024-06-08 01:01:29.441942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:11.260 Initializing NVMe Controllers
00:36:11.260 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:11.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:11.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:11.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:11.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:11.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:11.260 Initialization complete. Launching workers.
00:36:11.260 Starting thread on core 1
00:36:11.260 Starting thread on core 2
00:36:11.260 Starting thread on core 3
00:36:11.260 Starting thread on core 0
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:36:11.260
00:36:11.260 real 0m11.347s
00:36:11.260 user 0m20.884s
00:36:11.260 sys 0m3.898s
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.260 ************************************
00:36:11.260 END TEST nvmf_target_disconnect_tc2
00:36:11.260 ************************************
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:11.260 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:11.260 rmmod nvme_tcp
00:36:11.260 rmmod nvme_fabrics
00:36:11.521 rmmod nvme_keyring
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 678372 ']'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 678372 ']'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 678372'
00:36:11.521 killing process with pid 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 678372
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:11.521 01:01:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:14.067 01:01:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:14.067
00:36:14.067 real 0m20.751s
00:36:14.067 user 0m48.459s
00:36:14.067 sys 0m9.210s
00:36:14.067 01:01:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable
00:36:14.067 01:01:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:14.067 ************************************
00:36:14.067 END TEST nvmf_target_disconnect
00:36:14.067 ************************************
00:36:14.067 01:01:31 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host
00:36:14.067 01:01:31 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:14.067 01:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:14.067 01:01:31 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:36:14.067
00:36:14.067 real 29m15.188s
00:36:14.067 user 74m9.565s
00:36:14.067 sys 7m51.110s
00:36:14.067 01:01:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:36:14.067 01:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:14.067 ************************************
00:36:14.067 END TEST nvmf_tcp
00:36:14.067 ************************************
00:36:14.067 01:01:31 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:36:14.067 01:01:31 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:36:14.067 01:01:31 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:36:14.067 01:01:31 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:36:14.067 01:01:31 -- common/autotest_common.sh@10 -- # set +x
00:36:14.067 ************************************
00:36:14.067 START TEST spdkcli_nvmf_tcp
00:36:14.067 ************************************
00:36:14.067 01:01:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:36:14.067 * Looking for test storage...
00:36:14.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=680244
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 680244
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 680244 ']'
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100
00:36:14.067 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:14.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:14.068 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable
00:36:14.068 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:14.068 01:01:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:36:14.068 [2024-06-08 01:01:32.171922] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:36:14.068 [2024-06-08 01:01:32.171987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680244 ]
00:36:14.068 EAL: No free 2048 kB hugepages reported on node 1
00:36:14.068 [2024-06-08 01:01:32.235869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:36:14.068 [2024-06-08 01:01:32.311324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:36:14.068 [2024-06-08 01:01:32.311328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:15.013 01:01:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:36:15.013 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:36:15.013 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:36:15.013 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:36:15.013 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:36:15.013 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:36:15.013 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:36:15.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:36:15.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:36:15.013 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:36:15.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:36:15.013 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:36:15.013 '
00:36:17.556 [2024-06-08 01:01:35.300003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:18.496 [2024-06-08 01:01:36.463725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:36:20.407 [2024-06-08 01:01:38.601986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:36:22.322 [2024-06-08 01:01:40.435464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:36:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:36:23.707 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:36:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:36:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:36:23.707 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:36:23.707 01:01:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:36:23.707 01:01:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:23.707 01:01:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:23.968 01:01:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:36:23.968 01:01:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:23.968 01:01:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:23.968 01:01:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:24.230 01:01:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:36:24.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:36:24.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:36:24.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:36:24.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:36:24.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:36:24.230 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:36:24.230 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:36:24.230 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:36:24.230 '
00:36:29.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:36:29.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:36:29.554 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:29.554 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:36:29.554 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:36:29.554 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:36:29.554 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:36:29.554 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:36:29.554 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:36:29.554 01:01:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:36:29.554 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:29.554 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 680244
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 680244 ']'
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 680244
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 680244
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 680244'
00:36:29.815 killing process with pid 680244
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 680244
00:36:29.815 01:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 680244
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 680244 ']'
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 680244
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 680244 ']'
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 680244
00:36:29.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (680244) - No such process
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 680244 is not found'
00:36:29.815 Process with pid 680244 is not found
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:36:29.815
00:36:29.815 real 0m16.064s
00:36:29.815 user 0m33.759s
00:36:29.815 sys 0m0.781s
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:36:29.815 01:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:29.815 ************************************
00:36:29.815 END TEST spdkcli_nvmf_tcp
00:36:29.815 ************************************
00:36:30.077 01:01:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:36:30.077 01:01:48 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:36:30.077 01:01:48 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:36:30.077 01:01:48 -- common/autotest_common.sh@10 -- # set +x
00:36:30.077 ************************************
00:36:30.077 START TEST nvmf_identify_passthru
00:36:30.077 ************************************
00:36:30.077 01:01:48 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:36:30.077 * Looking for test storage...
00:36:30.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:30.077 01:01:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:30.077 01:01:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:30.077 01:01:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:30.077 01:01:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:30.077 01:01:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.077 01:01:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.077 01:01:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.077 01:01:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:36:30.077 01:01:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.077 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0
00:36:30.078 01:01:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:30.078 01:01:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:30.078 01:01:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:30.078 01:01:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:30.078 01:01:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.078 01:01:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.078 01:01:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.078 01:01:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:36:30.078 01:01:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:30.078 01:01:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:30.078 01:01:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:30.078 01:01:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:30.078 01:01:48 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable
00:36:30.078 01:01:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=()
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=()
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=()
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers
00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:36.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:36.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:36.685 01:01:54 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:36.685 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:36.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:36.685 01:01:54 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:36.685 01:01:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:36.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:36.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:36:36.946 00:36:36.946 --- 10.0.0.2 ping statistics --- 00:36:36.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.946 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:36.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:36.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:36:36.946 00:36:36.946 --- 10.0.0.1 ping statistics --- 00:36:36.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.946 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:36.946 01:01:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:36.946 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.946 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:36.946 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:36:37.207 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:36:37.207 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:36:37.207 01:01:55 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:36:37.207 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:37.207 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:37.207 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:37.207 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:37.207 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:37.207 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.780 
01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:36:37.780 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:37.780 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:37.780 01:01:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:37.780 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=687009 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:38.041 01:01:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 687009 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 687009 ']' 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:38.041 01:01:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.302 [2024-06-08 01:01:56.345955] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:36:38.302 [2024-06-08 01:01:56.346004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.302 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.302 [2024-06-08 01:01:56.408277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.302 [2024-06-08 01:01:56.473942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.302 [2024-06-08 01:01:56.473976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
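Because nvmf_tgt was launched with --wait-for-rpc, initialization parks until the test pushes configuration over the RPC socket; the nvmf_set_config / framework_start_init exchange traced below is exactly that handshake. Driven by hand with SPDK's stock RPC client it would look roughly like this (paths relative to an SPDK checkout, the netns wrapper omitted, and the sleep standing in for the script's waitforlisten helper):

    # Start the target paused, enable Identify passthru, then resume init.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    sleep 2   # crude wait for /var/tmp/spdk.sock to appear
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init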
00:36:38.302 [2024-06-08 01:01:56.473984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.302 [2024-06-08 01:01:56.473990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.302 [2024-06-08 01:01:56.473996] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.302 [2024-06-08 01:01:56.474128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.302 [2024-06-08 01:01:56.474257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.302 [2024-06-08 01:01:56.474434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:38.302 [2024-06-08 01:01:56.474451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.873 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:38.873 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:36:38.873 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:38.873 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.873 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.873 INFO: Log level set to 20 00:36:38.873 INFO: Requests: 00:36:38.873 { 00:36:38.873 "jsonrpc": "2.0", 00:36:38.873 "method": "nvmf_set_config", 00:36:38.873 "id": 1, 00:36:38.873 "params": { 00:36:38.873 "admin_cmd_passthru": { 00:36:38.873 "identify_ctrlr": true 00:36:38.873 } 00:36:38.873 } 00:36:38.873 } 00:36:38.874 00:36:38.874 INFO: response: 00:36:38.874 { 00:36:38.874 "jsonrpc": "2.0", 00:36:38.874 "id": 1, 00:36:38.874 "result": true 00:36:38.874 } 00:36:38.874 00:36:38.874 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.874 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:38.874 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.874 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.874 INFO: Setting log level to 20 00:36:38.874 INFO: Setting log level to 20 00:36:38.874 INFO: Log level set to 20 00:36:38.874 INFO: Log level set to 20 00:36:38.874 INFO: Requests: 00:36:38.874 { 00:36:38.874 "jsonrpc": "2.0", 00:36:38.874 "method": "framework_start_init", 00:36:38.874 "id": 1 00:36:38.874 } 00:36:38.874 00:36:38.874 INFO: Requests: 00:36:38.874 { 00:36:38.874 "jsonrpc": "2.0", 00:36:38.874 "method": "framework_start_init", 00:36:38.874 "id": 1 00:36:38.874 } 00:36:38.874 00:36:39.135 [2024-06-08 01:01:57.194823] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:39.135 INFO: response: 00:36:39.135 { 00:36:39.135 "jsonrpc": "2.0", 00:36:39.135 "id": 1, 00:36:39.135 "result": true 00:36:39.135 } 00:36:39.135 00:36:39.135 INFO: response: 00:36:39.135 { 00:36:39.135 "jsonrpc": "2.0", 00:36:39.135 "id": 1, 00:36:39.135 "result": true 00:36:39.135 } 00:36:39.135 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.135 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.135 01:01:57 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.135 INFO: Setting log level to 40 00:36:39.135 INFO: Setting log level to 40 00:36:39.135 INFO: Setting log level to 40 00:36:39.135 [2024-06-08 01:01:57.208068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.135 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.135 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.135 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.395 Nvme0n1 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.395 [2024-06-08 01:01:57.589685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.395 [ 00:36:39.395 { 00:36:39.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:39.395 "subtype": "Discovery", 00:36:39.395 "listen_addresses": [], 00:36:39.395 "allow_any_host": true, 00:36:39.395 "hosts": [] 00:36:39.395 }, 00:36:39.395 { 00:36:39.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.395 "subtype": "NVMe", 00:36:39.395 "listen_addresses": [ 00:36:39.395 { 00:36:39.395 "trtype": "TCP", 00:36:39.395 "adrfam": "IPv4", 00:36:39.395 "traddr": "10.0.0.2", 00:36:39.395 "trsvcid": "4420" 00:36:39.395 } 00:36:39.395 ], 00:36:39.395 "allow_any_host": true, 00:36:39.395 "hosts": [], 00:36:39.395 "serial_number": 
"SPDK00000000000001", 00:36:39.395 "model_number": "SPDK bdev Controller", 00:36:39.395 "max_namespaces": 1, 00:36:39.395 "min_cntlid": 1, 00:36:39.395 "max_cntlid": 65519, 00:36:39.395 "namespaces": [ 00:36:39.395 { 00:36:39.395 "nsid": 1, 00:36:39.395 "bdev_name": "Nvme0n1", 00:36:39.395 "name": "Nvme0n1", 00:36:39.395 "nguid": "3634473052605487002538450000003E", 00:36:39.395 "uuid": "36344730-5260-5487-0025-38450000003e" 00:36:39.395 } 00:36:39.395 ] 00:36:39.395 } 00:36:39.395 ] 00:36:39.395 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:39.395 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:39.395 EAL: No free 2048 kB hugepages reported on node 1 00:36:39.655 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:39.655 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:39.655 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:39.655 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:39.655 EAL: No free 2048 kB hugepages reported on node 1 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.916 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.916 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.916 01:01:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:39.916 01:01:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:39.916 01:01:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:39.916 rmmod nvme_tcp 00:36:39.916 rmmod nvme_fabrics 00:36:39.916 rmmod nvme_keyring 00:36:39.916 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:39.916 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:39.916 01:01:58 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:39.916 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 687009 ']' 00:36:39.916 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 687009 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 687009 ']' 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 687009 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 687009 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 687009' 00:36:39.916 killing process with pid 687009 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 687009 00:36:39.916 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 687009 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:40.177 01:01:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.177 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.177 01:01:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.724 01:02:00 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:42.724 00:36:42.724 real 0m12.315s 00:36:42.724 user 0m9.848s 00:36:42.724 sys 0m5.846s 00:36:42.724 01:02:00 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:42.724 01:02:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.724 ************************************ 00:36:42.724 END TEST nvmf_identify_passthru 00:36:42.724 ************************************ 00:36:42.724 01:02:00 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:42.724 01:02:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:42.724 01:02:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:42.724 01:02:00 -- common/autotest_common.sh@10 -- # set +x 00:36:42.724 ************************************ 00:36:42.724 START TEST nvmf_dif 00:36:42.724 ************************************ 00:36:42.724 01:02:00 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:42.724 * Looking for test storage... 
00:36:42.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:42.724 01:02:00 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:42.724 01:02:00 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:42.724 01:02:00 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:42.724 01:02:00 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:42.724 01:02:00 nvmf_dif -- paths/export.sh@2 -- # PATH=[toolchain + system PATH; duplicated /opt/go, /opt/golangci, /opt/protoc entries elided]
00:36:42.724 01:02:00 nvmf_dif -- paths/export.sh@3 -- # PATH=[toolchain + system PATH; elided]
00:36:42.724 01:02:00 nvmf_dif -- paths/export.sh@4 -- # PATH=[toolchain + system PATH; elided]
00:36:42.724 01:02:00 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:36:42.724 01:02:00 nvmf_dif -- paths/export.sh@6 -- # echo [same PATH; elided]
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@47 -- # : 0
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:42.724 01:02:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0
00:36:42.725 01:02:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:36:42.725 01:02:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:36:42.725 01:02:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:36:42.725 01:02:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:36:42.725 01:02:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:42.725 01:02:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:42.725 01:02:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:42.725 01:02:00 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable
00:36:42.725 01:02:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@296 -- # e810=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@297 -- # x722=()
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722
00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:49.311 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:49.311 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.311 01:02:07 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
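As in the earlier nvmftestinit run, each matched PCI function is resolved to its kernel net interface purely through sysfs; the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace is the entire mechanism. The same check as a one-off (BDF taken from the trace; interface names such as cvl_0_0 are specific to this rig):

    # Map a NIC's PCI address to the netdev(s) the kernel registered for it.
    pci=0000:4b:00.0
    for d in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: ${d##*/}"
    done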
00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:49.312 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:49.312 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.312 01:02:07 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:49.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:36:49.312 00:36:49.312 --- 10.0.0.2 ping statistics --- 00:36:49.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.312 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:49.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:36:49.312 00:36:49.312 --- 10.0.0.1 ping statistics --- 00:36:49.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.312 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:49.312 01:02:07 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:51.884 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:51.884 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:51.884 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:52.145 01:02:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:52.145 01:02:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=692846 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 692846 00:36:52.145 01:02:10 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 692846 ']' 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:52.145 01:02:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.145 [2024-06-08 01:02:10.394635] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:36:52.145 [2024-06-08 01:02:10.394692] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:52.406 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.406 [2024-06-08 01:02:10.464109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.406 [2024-06-08 01:02:10.538521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:52.406 [2024-06-08 01:02:10.538561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:52.406 [2024-06-08 01:02:10.538571] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:52.406 [2024-06-08 01:02:10.538579] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:52.406 [2024-06-08 01:02:10.538586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
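With the target up on a single core, dif.sh appends --dif-insert-or-strip to the transport options, so the TCP transport itself generates and verifies T10 DIF protection information on the I/O path rather than requiring the host to carry it. The matching manual RPC, mirroring the create_transport call traced below (default RPC socket assumed):

    # Create the TCP transport with DIF insert/strip enabled, as dif.sh does via
    # NVMF_TRANSPORT_OPTS='-t tcp -o --dif-insert-or-strip'.
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip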
00:36:52.406 [2024-06-08 01:02:10.538604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.977 01:02:11 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:52.977 01:02:11 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:36:52.977 01:02:11 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:52.977 01:02:11 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:52.977 01:02:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.977 01:02:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.977 01:02:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:52.977 01:02:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:52.977 01:02:11 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.978 01:02:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.978 [2024-06-08 01:02:11.204970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.978 01:02:11 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.978 01:02:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:52.978 01:02:11 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:36:52.978 01:02:11 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:52.978 01:02:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.978 ************************************ 00:36:52.978 START TEST fio_dif_1_default 00:36:52.978 ************************************ 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.978 bdev_null0 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.978 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.239 [2024-06-08 01:02:11.289298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:53.239 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:53.239 { 00:36:53.239 "params": { 00:36:53.239 "name": "Nvme$subsystem", 00:36:53.239 "trtype": "$TEST_TRANSPORT", 00:36:53.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.239 "adrfam": "ipv4", 00:36:53.240 "trsvcid": "$NVMF_PORT", 00:36:53.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.240 "hdgst": ${hdgst:-false}, 00:36:53.240 "ddgst": ${ddgst:-false} 00:36:53.240 }, 00:36:53.240 "method": "bdev_nvme_attach_controller" 00:36:53.240 } 00:36:53.240 EOF 00:36:53.240 )") 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:53.240 "params": { 00:36:53.240 "name": "Nvme0", 00:36:53.240 "trtype": "tcp", 00:36:53.240 "traddr": "10.0.0.2", 00:36:53.240 "adrfam": "ipv4", 00:36:53.240 "trsvcid": "4420", 00:36:53.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.240 "hdgst": false, 00:36:53.240 "ddgst": false 00:36:53.240 }, 00:36:53.240 "method": "bdev_nvme_attach_controller" 00:36:53.240 }' 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:53.240 01:02:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.501 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:53.501 fio-3.35 00:36:53.501 Starting 1 thread 00:36:53.501 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.730 00:37:05.730 filename0: (groupid=0, jobs=1): err= 0: pid=693381: Sat Jun 8 01:02:22 2024 00:37:05.730 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:37:05.730 slat (nsec): min=5662, max=31871, avg=6595.29, stdev=1608.02 00:37:05.730 clat (usec): min=41796, max=43945, avg=42002.83, stdev=173.11 00:37:05.730 lat (usec): min=41802, max=43976, avg=42009.42, stdev=173.60 00:37:05.730 clat percentiles (usec): 00:37:05.730 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:37:05.730 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:05.730 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:05.730 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:37:05.730 | 99.99th=[43779] 00:37:05.730 bw ( KiB/s): min= 352, max= 384, per=99.80%, avg=380.63, stdev=10.09, samples=19 00:37:05.730 iops : min= 88, max= 96, 
avg=95.16, stdev= 2.52, samples=19 00:37:05.730 lat (msec) : 50=100.00% 00:37:05.730 cpu : usr=95.62%, sys=4.19%, ctx=10, majf=0, minf=215 00:37:05.730 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.730 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.730 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:05.730 00:37:05.730 Run status group 0 (all jobs): 00:37:05.730 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10001-10001msec 00:37:05.730 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:05.730 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:05.730 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.730 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:05.730 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 00:37:05.731 real 0m11.199s 00:37:05.731 user 0m26.638s 00:37:05.731 sys 0m0.678s 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 ************************************ 00:37:05.731 END TEST fio_dif_1_default 00:37:05.731 ************************************ 00:37:05.731 01:02:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:05.731 01:02:22 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:05.731 01:02:22 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 ************************************ 00:37:05.731 START TEST fio_dif_1_multi_subsystems 00:37:05.731 ************************************ 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 bdev_null0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 [2024-06-08 01:02:22.559718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 bdev_null1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.731 { 00:37:05.731 "params": { 00:37:05.731 "name": "Nvme$subsystem", 00:37:05.731 "trtype": "$TEST_TRANSPORT", 00:37:05.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.731 "adrfam": "ipv4", 00:37:05.731 "trsvcid": "$NVMF_PORT", 00:37:05.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.731 "hdgst": ${hdgst:-false}, 00:37:05.731 "ddgst": ${ddgst:-false} 00:37:05.731 }, 00:37:05.731 "method": "bdev_nvme_attach_controller" 00:37:05.731 } 00:37:05.731 EOF 00:37:05.731 )") 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.731 { 00:37:05.731 "params": { 00:37:05.731 "name": "Nvme$subsystem", 00:37:05.731 "trtype": "$TEST_TRANSPORT", 00:37:05.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.731 "adrfam": "ipv4", 00:37:05.731 "trsvcid": "$NVMF_PORT", 00:37:05.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.731 "hdgst": ${hdgst:-false}, 00:37:05.731 "ddgst": ${ddgst:-false} 00:37:05.731 }, 00:37:05.731 "method": "bdev_nvme_attach_controller" 00:37:05.731 } 00:37:05.731 EOF 00:37:05.731 )") 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
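Note: the trace above shows gen_nvmf_target_json emitting one JSON fragment per subsystem and merging them with jq; the merged configuration that fio reads from /dev/fd/62 is printed just below. Outside the harness, the same two DIF-enabled targets could be provisioned by hand with scripts/rpc.py — a minimal sketch, assuming a running SPDK target with the tcp transport already created as at the start of this run (names, sizes, and addresses copied from the trace):

  # two 64 MiB null bdevs: 512-byte blocks + 16-byte metadata, DIF type 1
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  # one subsystem per bdev, both listening on the same TCP portal
  for i in 0 1; do
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          --serial-number 53313233-$i --allow-any-host
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done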
00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:05.731 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:05.731 "params": { 00:37:05.731 "name": "Nvme0", 00:37:05.731 "trtype": "tcp", 00:37:05.731 "traddr": "10.0.0.2", 00:37:05.731 "adrfam": "ipv4", 00:37:05.731 "trsvcid": "4420", 00:37:05.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.732 "hdgst": false, 00:37:05.732 "ddgst": false 00:37:05.732 }, 00:37:05.732 "method": "bdev_nvme_attach_controller" 00:37:05.732 },{ 00:37:05.732 "params": { 00:37:05.732 "name": "Nvme1", 00:37:05.732 "trtype": "tcp", 00:37:05.732 "traddr": "10.0.0.2", 00:37:05.732 "adrfam": "ipv4", 00:37:05.732 "trsvcid": "4420", 00:37:05.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.732 "hdgst": false, 00:37:05.732 "ddgst": false 00:37:05.732 }, 00:37:05.732 "method": "bdev_nvme_attach_controller" 00:37:05.732 }' 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.732 01:02:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:05.732 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:05.732 fio-3.35 00:37:05.732 Starting 2 threads 00:37:05.732 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.736 00:37:15.736 filename0: (groupid=0, jobs=1): err= 0: pid=695580: Sat Jun 8 01:02:33 2024 00:37:15.736 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10010msec) 00:37:15.736 slat (nsec): min=5647, max=40330, avg=6469.66, stdev=1660.15 00:37:15.736 clat (usec): min=893, max=42517, avg=21554.57, stdev=20424.34 00:37:15.736 lat (usec): min=901, max=42549, avg=21561.04, stdev=20424.36 00:37:15.736 clat percentiles (usec): 00:37:15.736 | 1.00th=[ 947], 5.00th=[ 963], 10.00th=[ 979], 20.00th=[ 996], 00:37:15.736 | 30.00th=[ 1012], 40.00th=[ 1156], 50.00th=[41157], 60.00th=[41681], 00:37:15.736 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:15.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:15.736 | 99.99th=[42730] 
00:37:15.736 bw ( KiB/s): min= 672, max= 768, per=66.07%, avg=740.80, stdev=34.86, samples=20 00:37:15.736 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:37:15.736 lat (usec) : 1000=24.19% 00:37:15.736 lat (msec) : 2=25.59%, 50=50.22% 00:37:15.736 cpu : usr=96.82%, sys=2.92%, ctx=54, majf=0, minf=128 00:37:15.736 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:15.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.736 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.736 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:15.736 filename1: (groupid=0, jobs=1): err= 0: pid=695581: Sat Jun 8 01:02:33 2024 00:37:15.736 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:37:15.736 slat (nsec): min=5644, max=40292, avg=6664.43, stdev=2161.19 00:37:15.736 clat (usec): min=41881, max=43030, avg=41996.93, stdev=120.72 00:37:15.736 lat (usec): min=41886, max=43036, avg=42003.59, stdev=121.27 00:37:15.736 clat percentiles (usec): 00:37:15.736 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:37:15.736 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:15.736 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:15.736 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:37:15.736 | 99.99th=[43254] 00:37:15.736 bw ( KiB/s): min= 352, max= 384, per=33.93%, avg=380.80, stdev= 9.85, samples=20 00:37:15.736 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:37:15.736 lat (msec) : 50=100.00% 00:37:15.736 cpu : usr=96.87%, sys=2.93%, ctx=14, majf=0, minf=117 00:37:15.736 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:15.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.737 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.737 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:15.737 00:37:15.737 Run status group 0 (all jobs): 00:37:15.737 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-742KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10010-10042msec 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 
00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 00:37:15.737 real 0m11.310s 00:37:15.737 user 0m34.418s 00:37:15.737 sys 0m0.878s 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 ************************************ 00:37:15.737 END TEST fio_dif_1_multi_subsystems 00:37:15.737 ************************************ 00:37:15.737 01:02:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:15.737 01:02:33 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:15.737 01:02:33 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 ************************************ 00:37:15.737 START TEST fio_dif_rand_params 00:37:15.737 ************************************ 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 bdev_null0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.737 [2024-06-08 01:02:33.946929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:15.737 { 00:37:15.737 "params": { 00:37:15.737 "name": "Nvme$subsystem", 00:37:15.737 "trtype": "$TEST_TRANSPORT", 00:37:15.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.737 "adrfam": "ipv4", 00:37:15.737 "trsvcid": "$NVMF_PORT", 00:37:15.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:37:15.737 "hdgst": ${hdgst:-false}, 00:37:15.737 "ddgst": ${ddgst:-false} 00:37:15.737 }, 00:37:15.737 "method": "bdev_nvme_attach_controller" 00:37:15.737 } 00:37:15.737 EOF 00:37:15.737 )") 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:15.737 "params": { 00:37:15.737 "name": "Nvme0", 00:37:15.737 "trtype": "tcp", 00:37:15.737 "traddr": "10.0.0.2", 00:37:15.737 "adrfam": "ipv4", 00:37:15.737 "trsvcid": "4420", 00:37:15.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.737 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.737 "hdgst": false, 00:37:15.737 "ddgst": false 00:37:15.737 }, 00:37:15.737 "method": "bdev_nvme_attach_controller" 00:37:15.737 }' 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:15.737 01:02:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:16.015 01:02:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:16.015 01:02:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:16.015 01:02:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:16.015 01:02:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.276 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:16.276 ... 
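Note: the run above drives fio through SPDK's bdev fio plugin rather than a kernel initiator: the plugin is LD_PRELOADed from build/fio/spdk_bdev, --spdk_json_conf supplies the bdev_nvme_attach_controller configuration on /dev/fd/62, and the generated job file arrives on /dev/fd/61 with filename= naming the attached bdev. A standalone equivalent, with illustrative file names in place of the fd redirections:

  LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
      --spdk_json_conf=./bdev.json ./dif.fio
  # where dif.fio holds the parameters seen in the banner above, e.g.:
  #   [filename0]
  #   filename=Nvme0n1   # namespace 1 of the attached controller Nvme0
  #   rw=randread
  #   bs=128k
  #   iodepth=3
  #   numjobs=3
  #   runtime=5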
00:37:16.276 fio-3.35 00:37:16.276 Starting 3 threads 00:37:16.276 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.889 00:37:22.889 filename0: (groupid=0, jobs=1): err= 0: pid=697883: Sat Jun 8 01:02:40 2024 00:37:22.889 read: IOPS=152, BW=19.1MiB/s (20.0MB/s)(95.5MiB/5005msec) 00:37:22.889 slat (nsec): min=5705, max=32440, avg=8396.96, stdev=1786.55 00:37:22.889 clat (usec): min=6272, max=93807, avg=19636.26, stdev=20302.64 00:37:22.889 lat (usec): min=6281, max=93816, avg=19644.65, stdev=20302.58 00:37:22.889 clat percentiles (usec): 00:37:22.889 | 1.00th=[ 6652], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8848], 00:37:22.889 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:37:22.889 | 70.00th=[12256], 80.00th=[47973], 90.00th=[51643], 95.00th=[53740], 00:37:22.889 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:37:22.889 | 99.99th=[93848] 00:37:22.889 bw ( KiB/s): min=13056, max=23808, per=34.42%, avg=19532.80, stdev=3802.94, samples=10 00:37:22.889 iops : min= 102, max= 186, avg=152.60, stdev=29.71, samples=10 00:37:22.889 lat (msec) : 10=40.45%, 20=39.53%, 50=3.53%, 100=16.49% 00:37:22.889 cpu : usr=96.24%, sys=3.48%, ctx=11, majf=0, minf=38 00:37:22.889 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.889 filename0: (groupid=0, jobs=1): err= 0: pid=697885: Sat Jun 8 01:02:40 2024 00:37:22.889 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(67.2MiB/5048msec) 00:37:22.889 slat (nsec): min=5678, max=31704, avg=8429.72, stdev=2014.77 00:37:22.889 clat (usec): min=7236, max=95568, avg=28044.00, stdev=21640.89 00:37:22.889 lat (usec): min=7245, max=95577, avg=28052.43, stdev=21640.56 00:37:22.889 clat percentiles (usec): 00:37:22.889 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10552], 00:37:22.889 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13042], 60.00th=[15533], 00:37:22.889 | 70.00th=[50594], 80.00th=[51643], 90.00th=[53216], 95.00th=[54264], 00:37:22.889 | 99.00th=[91751], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:37:22.889 | 99.99th=[95945] 00:37:22.889 bw ( KiB/s): min= 9216, max=18432, per=24.18%, avg=13721.60, stdev=3475.07, samples=10 00:37:22.889 iops : min= 72, max= 144, avg=107.20, stdev=27.15, samples=10 00:37:22.889 lat (msec) : 10=14.13%, 20=46.84%, 50=4.65%, 100=34.39% 00:37:22.889 cpu : usr=96.99%, sys=2.75%, ctx=9, majf=0, minf=108 00:37:22.889 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.889 filename0: (groupid=0, jobs=1): err= 0: pid=697886: Sat Jun 8 01:02:40 2024 00:37:22.889 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(117MiB/5031msec) 00:37:22.889 slat (nsec): min=5706, max=33062, avg=8347.97, stdev=1718.24 00:37:22.889 clat (usec): min=6566, max=92999, avg=16111.15, stdev=15810.74 00:37:22.889 lat (usec): min=6575, max=93008, avg=16119.50, stdev=15810.97 00:37:22.889 clat percentiles (usec): 
00:37:22.889 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8717], 00:37:22.889 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[10945], 00:37:22.889 | 70.00th=[11863], 80.00th=[13042], 90.00th=[50070], 95.00th=[51643], 00:37:22.889 | 99.00th=[90702], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:37:22.889 | 99.99th=[92799] 00:37:22.889 bw ( KiB/s): min=13056, max=34560, per=42.09%, avg=23884.80, stdev=6630.84, samples=10 00:37:22.889 iops : min= 102, max= 270, avg=186.60, stdev=51.80, samples=10 00:37:22.889 lat (msec) : 10=42.95%, 20=43.80%, 50=2.78%, 100=10.47% 00:37:22.889 cpu : usr=96.52%, sys=3.24%, ctx=12, majf=0, minf=120 00:37:22.889 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:22.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.889 issued rwts: total=936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:22.889 00:37:22.889 Run status group 0 (all jobs): 00:37:22.889 READ: bw=55.4MiB/s (58.1MB/s), 13.3MiB/s-23.3MiB/s (14.0MB/s-24.4MB/s), io=280MiB (293MB), run=5005-5048msec 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
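Note: the parameters just set (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2) switch the suite to DIF type 2 spread across three subsystems, created below with the same RPC pattern as before. Because the transport was created with --dif-insert-or-strip at the top of this run, the target inserts and strips the 16-byte protection information itself, so the initiator side keeps seeing plain 512-byte blocks. One quick way to confirm a bdev's DIF settings from the shell — a sketch only; the field names are as reported by bdev_get_bdevs in this SPDK tree and may differ across versions:

  ./scripts/rpc.py bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
  ./scripts/rpc.py bdev_get_bdevs -b bdev_null2 \
      | jq '.[0] | {block_size, md_size, dif_type}'   # expect 512 / 16 / 2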
00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 bdev_null0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 [2024-06-08 01:02:40.195977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 bdev_null1 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.889 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.890 bdev_null2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.890 { 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme$subsystem", 00:37:22.890 "trtype": "$TEST_TRANSPORT", 00:37:22.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "$NVMF_PORT", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.890 "hdgst": ${hdgst:-false}, 00:37:22.890 "ddgst": ${ddgst:-false} 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 } 00:37:22.890 EOF 00:37:22.890 )") 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.890 { 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme$subsystem", 00:37:22.890 "trtype": "$TEST_TRANSPORT", 00:37:22.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "$NVMF_PORT", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.890 "hdgst": ${hdgst:-false}, 00:37:22.890 "ddgst": ${ddgst:-false} 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 } 00:37:22.890 EOF 00:37:22.890 )") 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.890 { 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme$subsystem", 00:37:22.890 "trtype": "$TEST_TRANSPORT", 00:37:22.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "$NVMF_PORT", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.890 "hdgst": ${hdgst:-false}, 00:37:22.890 "ddgst": ${ddgst:-false} 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 } 00:37:22.890 EOF 00:37:22.890 )") 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme0", 00:37:22.890 "trtype": "tcp", 00:37:22.890 "traddr": "10.0.0.2", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "4420", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.890 "hdgst": false, 00:37:22.890 "ddgst": false 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 },{ 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme1", 00:37:22.890 "trtype": "tcp", 00:37:22.890 "traddr": "10.0.0.2", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "4420", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.890 "hdgst": false, 00:37:22.890 "ddgst": false 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 },{ 00:37:22.890 "params": { 00:37:22.890 "name": "Nvme2", 00:37:22.890 "trtype": "tcp", 00:37:22.890 "traddr": "10.0.0.2", 00:37:22.890 "adrfam": "ipv4", 00:37:22.890 "trsvcid": "4420", 00:37:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:22.890 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:22.890 "hdgst": false, 00:37:22.890 "ddgst": false 00:37:22.890 }, 00:37:22.890 "method": "bdev_nvme_attach_controller" 00:37:22.890 }' 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.890 01:02:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.890 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.890 ... 00:37:22.890 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.891 ... 00:37:22.891 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.891 ... 00:37:22.891 fio-3.35 00:37:22.891 Starting 24 threads 00:37:22.891 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.184 00:37:35.184 filename0: (groupid=0, jobs=1): err= 0: pid=699284: Sat Jun 8 01:02:51 2024 00:37:35.184 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10016msec) 00:37:35.184 slat (nsec): min=5816, max=98336, avg=13532.28, stdev=10323.62 00:37:35.184 clat (usec): min=3411, max=48985, avg=30864.00, stdev=4647.31 00:37:35.184 lat (usec): min=3430, max=48992, avg=30877.53, stdev=4647.42 00:37:35.184 clat percentiles (usec): 00:37:35.184 | 1.00th=[ 4752], 5.00th=[21627], 10.00th=[28443], 20.00th=[31327], 00:37:35.184 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.184 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:37:35.184 | 99.00th=[38536], 99.50th=[40633], 99.90th=[49021], 99.95th=[49021], 00:37:35.184 | 99.99th=[49021] 00:37:35.184 bw ( KiB/s): min= 1920, max= 2608, per=4.31%, avg=2064.80, stdev=152.01, samples=20 00:37:35.184 iops : min= 480, max= 652, avg=516.20, stdev=38.00, samples=20 00:37:35.184 lat (msec) : 4=0.31%, 10=1.55%, 20=1.76%, 50=96.38% 00:37:35.184 cpu : usr=97.10%, sys=1.71%, ctx=112, majf=0, minf=41 00:37:35.184 IO depths : 1=4.7%, 2=9.5%, 4=20.3%, 8=57.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:35.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.184 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.184 issued rwts: total=5172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.184 filename0: (groupid=0, jobs=1): err= 0: pid=699285: Sat Jun 8 01:02:51 2024 00:37:35.184 read: IOPS=511, BW=2047KiB/s (2096kB/s)(20.0MiB/10007msec) 00:37:35.184 slat (nsec): min=5822, max=72045, avg=15536.98, stdev=10892.28 00:37:35.184 clat (usec): min=4193, max=57599, avg=31148.83, stdev=4271.40 00:37:35.184 lat (usec): min=4206, max=57620, avg=31164.37, stdev=4272.44 00:37:35.184 clat percentiles (usec): 00:37:35.184 | 1.00th=[ 7373], 5.00th=[23987], 10.00th=[30540], 20.00th=[31327], 00:37:35.184 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.184 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[33424], 00:37:35.184 | 99.00th=[39584], 99.50th=[44303], 99.90th=[47973], 99.95th=[57410], 00:37:35.184 | 99.99th=[57410] 00:37:35.184 bw ( KiB/s): min= 1920, max= 2436, per=4.26%, avg=2041.80, stdev=123.86, samples=20 00:37:35.184 iops : min= 480, max= 609, avg=510.45, stdev=30.96, samples=20 00:37:35.184 lat (msec) : 10=1.56%, 
20=0.88%, 50=97.50%, 100=0.06% 00:37:35.184 cpu : usr=98.88%, sys=0.79%, ctx=62, majf=0, minf=24 00:37:35.184 IO depths : 1=4.3%, 2=8.8%, 4=19.6%, 8=58.4%, 16=8.8%, 32=0.0%, >=64=0.0% 00:37:35.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.184 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.184 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.184 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.184 filename0: (groupid=0, jobs=1): err= 0: pid=699286: Sat Jun 8 01:02:51 2024 00:37:35.184 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10007msec) 00:37:35.184 slat (nsec): min=5837, max=87870, avg=18921.94, stdev=12701.35 00:37:35.184 clat (usec): min=13241, max=58387, avg=31813.68, stdev=1908.28 00:37:35.184 lat (usec): min=13270, max=58403, avg=31832.60, stdev=1908.17 00:37:35.184 clat percentiles (usec): 00:37:35.184 | 1.00th=[25822], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.184 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.184 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:37:35.184 | 99.00th=[33817], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:37:35.185 | 99.99th=[58459] 00:37:35.185 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.95, stdev=64.15, samples=20 00:37:35.185 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:37:35.185 lat (msec) : 20=0.92%, 50=99.04%, 100=0.04% 00:37:35.185 cpu : usr=99.07%, sys=0.63%, ctx=34, majf=0, minf=36 00:37:35.185 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename0: (groupid=0, jobs=1): err= 0: pid=699287: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10027msec) 00:37:35.185 slat (nsec): min=5710, max=94828, avg=16931.21, stdev=13592.79 00:37:35.185 clat (usec): min=15302, max=55335, avg=31931.44, stdev=4472.79 00:37:35.185 lat (usec): min=15311, max=55369, avg=31948.37, stdev=4473.87 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[18482], 5.00th=[22938], 10.00th=[30540], 20.00th=[31327], 00:37:35.185 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:35.185 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[41681], 00:37:35.185 | 99.00th=[47449], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:37:35.185 | 99.99th=[55313] 00:37:35.185 bw ( KiB/s): min= 1872, max= 2128, per=4.17%, avg=1995.20, stdev=69.28, samples=20 00:37:35.185 iops : min= 468, max= 532, avg=498.80, stdev=17.32, samples=20 00:37:35.185 lat (msec) : 20=2.54%, 50=96.90%, 100=0.56% 00:37:35.185 cpu : usr=97.18%, sys=1.84%, ctx=35, majf=0, minf=33 00:37:35.185 IO depths : 1=2.5%, 2=5.4%, 4=14.8%, 8=65.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=92.1%, 8=3.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename0: (groupid=0, jobs=1): err= 0: pid=699288: Sat Jun 8 
01:02:51 2024 00:37:35.185 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:37:35.185 slat (usec): min=5, max=140, avg=24.83, stdev=16.66 00:37:35.185 clat (usec): min=18613, max=48961, avg=32188.16, stdev=3280.06 00:37:35.185 lat (usec): min=18648, max=48974, avg=32212.99, stdev=3278.83 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[23200], 5.00th=[28443], 10.00th=[31065], 20.00th=[31327], 00:37:35.185 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[39060], 00:37:35.185 | 99.00th=[46924], 99.50th=[47973], 99.90th=[48497], 99.95th=[49021], 00:37:35.185 | 99.99th=[49021] 00:37:35.185 bw ( KiB/s): min= 1840, max= 2048, per=4.12%, avg=1971.20, stdev=69.23, samples=20 00:37:35.185 iops : min= 460, max= 512, avg=492.80, stdev=17.31, samples=20 00:37:35.185 lat (msec) : 20=0.38%, 50=99.62% 00:37:35.185 cpu : usr=97.94%, sys=1.42%, ctx=33, majf=0, minf=29 00:37:35.185 IO depths : 1=3.7%, 2=7.6%, 4=18.6%, 8=60.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename0: (groupid=0, jobs=1): err= 0: pid=699289: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=493, BW=1972KiB/s (2019kB/s)(19.3MiB/10030msec) 00:37:35.185 slat (nsec): min=5698, max=87256, avg=20130.09, stdev=15719.91 00:37:35.185 clat (usec): min=15234, max=59226, avg=32287.60, stdev=4537.55 00:37:35.185 lat (usec): min=15242, max=59234, avg=32307.73, stdev=4537.33 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[18482], 5.00th=[25297], 10.00th=[30802], 20.00th=[31327], 00:37:35.185 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32375], 90.00th=[35390], 95.00th=[41681], 00:37:35.185 | 99.00th=[50070], 99.50th=[51643], 99.90th=[56886], 99.95th=[58983], 00:37:35.185 | 99.99th=[58983] 00:37:35.185 bw ( KiB/s): min= 1840, max= 2048, per=4.12%, avg=1971.60, stdev=70.53, samples=20 00:37:35.185 iops : min= 460, max= 512, avg=492.90, stdev=17.63, samples=20 00:37:35.185 lat (msec) : 20=1.52%, 50=97.53%, 100=0.95% 00:37:35.185 cpu : usr=98.95%, sys=0.71%, ctx=38, majf=0, minf=22 00:37:35.185 IO depths : 1=2.9%, 2=5.7%, 4=15.1%, 8=65.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=91.9%, 8=3.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=4945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename0: (groupid=0, jobs=1): err= 0: pid=699290: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=498, BW=1996KiB/s (2044kB/s)(19.5MiB/10005msec) 00:37:35.185 slat (nsec): min=5831, max=91723, avg=21163.26, stdev=14398.30 00:37:35.185 clat (usec): min=14727, max=67405, avg=31859.45, stdev=2361.62 00:37:35.185 lat (usec): min=14733, max=67423, avg=31880.61, stdev=2361.49 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.185 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 
00:37:35.185 | 99.00th=[33424], 99.50th=[34341], 99.90th=[67634], 99.95th=[67634], 00:37:35.185 | 99.99th=[67634] 00:37:35.185 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.37, stdev=78.31, samples=19 00:37:35.185 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:37:35.185 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:37:35.185 cpu : usr=99.14%, sys=0.55%, ctx=62, majf=0, minf=27 00:37:35.185 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename0: (groupid=0, jobs=1): err= 0: pid=699291: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10013msec) 00:37:35.185 slat (nsec): min=5888, max=98904, avg=28260.20, stdev=17249.78 00:37:35.185 clat (usec): min=13277, max=49206, avg=31748.33, stdev=1719.85 00:37:35.185 lat (usec): min=13284, max=49233, avg=31776.59, stdev=1719.78 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:37:35.185 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:37:35.185 | 99.00th=[33424], 99.50th=[33817], 99.90th=[49021], 99.95th=[49021], 00:37:35.185 | 99.99th=[49021] 00:37:35.185 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=1996.95, stdev=76.42, samples=20 00:37:35.185 iops : min= 480, max= 544, avg=499.20, stdev=19.14, samples=20 00:37:35.185 lat (msec) : 20=0.64%, 50=99.36% 00:37:35.185 cpu : usr=99.19%, sys=0.53%, ctx=14, majf=0, minf=29 00:37:35.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename1: (groupid=0, jobs=1): err= 0: pid=699292: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10010msec) 00:37:35.185 slat (nsec): min=5829, max=90375, avg=18630.47, stdev=16393.19 00:37:35.185 clat (usec): min=13536, max=46703, avg=31836.05, stdev=1822.27 00:37:35.185 lat (usec): min=13543, max=46710, avg=31854.68, stdev=1821.74 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[28181], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.185 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:37:35.185 | 99.00th=[33817], 99.50th=[44303], 99.90th=[45351], 99.95th=[45876], 00:37:35.185 | 99.99th=[46924] 00:37:35.185 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.35, stdev=63.99, samples=20 00:37:35.185 iops : min= 480, max= 512, avg=499.05, stdev=15.97, samples=20 00:37:35.185 lat (msec) : 20=0.84%, 50=99.16% 00:37:35.185 cpu : usr=97.38%, sys=1.40%, ctx=71, majf=0, minf=34 00:37:35.185 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete 
: 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename1: (groupid=0, jobs=1): err= 0: pid=699293: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:37:35.185 slat (nsec): min=5845, max=92736, avg=22109.42, stdev=14384.80 00:37:35.185 clat (usec): min=18288, max=44029, avg=31797.15, stdev=1192.33 00:37:35.185 lat (usec): min=18295, max=44048, avg=31819.26, stdev=1192.34 00:37:35.185 clat percentiles (usec): 00:37:35.185 | 1.00th=[28443], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.185 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.185 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.185 | 99.00th=[33817], 99.50th=[33817], 99.90th=[43779], 99.95th=[43779], 00:37:35.185 | 99.99th=[43779] 00:37:35.185 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.80, stdev=62.85, samples=20 00:37:35.185 iops : min= 480, max= 512, avg=499.20, stdev=15.71, samples=20 00:37:35.185 lat (msec) : 20=0.32%, 50=99.68% 00:37:35.185 cpu : usr=97.11%, sys=1.51%, ctx=93, majf=0, minf=24 00:37:35.185 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:35.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.185 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.185 filename1: (groupid=0, jobs=1): err= 0: pid=699294: Sat Jun 8 01:02:51 2024 00:37:35.185 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10005msec) 00:37:35.185 slat (nsec): min=6009, max=84910, avg=25286.46, stdev=14521.23 00:37:35.185 clat (usec): min=13283, max=70732, avg=31574.23, stdev=2711.71 00:37:35.186 lat (usec): min=13290, max=70753, avg=31599.51, stdev=2712.53 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[18744], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:37:35.186 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:37:35.186 | 99.00th=[38536], 99.50th=[42206], 99.90th=[55837], 99.95th=[55837], 00:37:35.186 | 99.99th=[70779] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2064, per=4.19%, avg=2004.84, stdev=61.37, samples=19 00:37:35.186 iops : min= 480, max= 516, avg=501.21, stdev=15.34, samples=19 00:37:35.186 lat (msec) : 20=1.07%, 50=98.61%, 100=0.32% 00:37:35.186 cpu : usr=97.31%, sys=1.39%, ctx=50, majf=0, minf=32 00:37:35.186 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename1: (groupid=0, jobs=1): err= 0: pid=699295: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=509, BW=2039KiB/s (2088kB/s)(19.9MiB/10013msec) 00:37:35.186 slat (nsec): min=5827, max=63855, avg=7842.59, stdev=3318.55 00:37:35.186 clat (usec): min=3511, max=47447, avg=31320.15, stdev=4165.77 00:37:35.186 lat (usec): min=3541, max=47453, avg=31327.99, stdev=4164.71 
00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[ 4752], 5.00th=[30278], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:35.186 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:37:35.186 | 99.00th=[33817], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:37:35.186 | 99.99th=[47449] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2560, per=4.25%, avg=2035.20, stdev=137.11, samples=20 00:37:35.186 iops : min= 480, max= 640, avg=508.80, stdev=34.28, samples=20 00:37:35.186 lat (msec) : 4=0.31%, 10=1.57%, 20=1.04%, 50=97.08% 00:37:35.186 cpu : usr=99.22%, sys=0.51%, ctx=6, majf=0, minf=33 00:37:35.186 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename1: (groupid=0, jobs=1): err= 0: pid=699296: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.7MiB/10013msec) 00:37:35.186 slat (nsec): min=5817, max=76456, avg=14756.93, stdev=11452.62 00:37:35.186 clat (usec): min=9837, max=44506, avg=31586.86, stdev=2337.05 00:37:35.186 lat (usec): min=9844, max=44543, avg=31601.61, stdev=2338.21 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[19268], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33817], 99.50th=[33817], 99.90th=[44303], 99.95th=[44303], 00:37:35.186 | 99.99th=[44303] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2368, per=4.21%, avg=2015.20, stdev=102.58, samples=20 00:37:35.186 iops : min= 480, max= 592, avg=503.80, stdev=25.64, samples=20 00:37:35.186 lat (msec) : 10=0.24%, 20=0.99%, 50=98.77% 00:37:35.186 cpu : usr=97.46%, sys=1.41%, ctx=48, majf=0, minf=41 00:37:35.186 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename1: (groupid=0, jobs=1): err= 0: pid=699297: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=498, BW=1996KiB/s (2044kB/s)(19.5MiB/10005msec) 00:37:35.186 slat (nsec): min=5831, max=86329, avg=21748.49, stdev=14334.68 00:37:35.186 clat (usec): min=13633, max=67346, avg=31855.48, stdev=2369.83 00:37:35.186 lat (usec): min=13640, max=67363, avg=31877.23, stdev=2369.66 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33424], 99.50th=[34341], 99.90th=[67634], 99.95th=[67634], 00:37:35.186 | 99.99th=[67634] 00:37:35.186 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.37, stdev=78.31, samples=19 00:37:35.186 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 
00:37:35.186 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:37:35.186 cpu : usr=99.12%, sys=0.59%, ctx=14, majf=0, minf=24 00:37:35.186 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename1: (groupid=0, jobs=1): err= 0: pid=699298: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:37:35.186 slat (nsec): min=5834, max=80891, avg=19932.18, stdev=13512.27 00:37:35.186 clat (usec): min=17889, max=48552, avg=31827.02, stdev=1288.41 00:37:35.186 lat (usec): min=17895, max=48562, avg=31846.96, stdev=1288.65 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[27919], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33817], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:37:35.186 | 99.99th=[48497] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2064, per=4.17%, avg=1996.80, stdev=64.55, samples=20 00:37:35.186 iops : min= 480, max= 516, avg=499.20, stdev=16.14, samples=20 00:37:35.186 lat (msec) : 20=0.36%, 50=99.64% 00:37:35.186 cpu : usr=97.27%, sys=1.44%, ctx=63, majf=0, minf=31 00:37:35.186 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename1: (groupid=0, jobs=1): err= 0: pid=699299: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:37:35.186 slat (nsec): min=5829, max=81163, avg=20529.91, stdev=14374.44 00:37:35.186 clat (usec): min=17554, max=44113, avg=31799.54, stdev=1265.01 00:37:35.186 lat (usec): min=17561, max=44135, avg=31820.06, stdev=1265.27 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[27919], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33817], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:37:35.186 | 99.99th=[44303] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.80, stdev=64.34, samples=20 00:37:35.186 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:37:35.186 lat (msec) : 20=0.32%, 50=99.68% 00:37:35.186 cpu : usr=97.22%, sys=1.44%, ctx=84, majf=0, minf=26 00:37:35.186 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename2: (groupid=0, jobs=1): err= 0: pid=699300: Sat Jun 8 
01:02:51 2024 00:37:35.186 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:37:35.186 slat (nsec): min=5840, max=89135, avg=21853.84, stdev=14767.66 00:37:35.186 clat (usec): min=17321, max=52486, avg=31796.09, stdev=1566.03 00:37:35.186 lat (usec): min=17327, max=52493, avg=31817.95, stdev=1566.22 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[27657], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.186 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33817], 99.50th=[41157], 99.90th=[44303], 99.95th=[46400], 00:37:35.186 | 99.99th=[52691] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.80, stdev=62.85, samples=20 00:37:35.186 iops : min= 480, max= 512, avg=499.20, stdev=15.71, samples=20 00:37:35.186 lat (msec) : 20=0.32%, 50=99.64%, 100=0.04% 00:37:35.186 cpu : usr=99.08%, sys=0.55%, ctx=62, majf=0, minf=25 00:37:35.186 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.186 filename2: (groupid=0, jobs=1): err= 0: pid=699301: Sat Jun 8 01:02:51 2024 00:37:35.186 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10005msec) 00:37:35.186 slat (nsec): min=6040, max=88496, avg=27610.69, stdev=16362.40 00:37:35.186 clat (usec): min=13110, max=40817, avg=31693.28, stdev=1501.71 00:37:35.186 lat (usec): min=13139, max=40833, avg=31720.89, stdev=1502.28 00:37:35.186 clat percentiles (usec): 00:37:35.186 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:37:35.186 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.186 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.186 | 99.00th=[33424], 99.50th=[33817], 99.90th=[40633], 99.95th=[40633], 00:37:35.186 | 99.99th=[40633] 00:37:35.186 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.26, stdev=64.74, samples=19 00:37:35.186 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:37:35.186 lat (msec) : 20=0.64%, 50=99.36% 00:37:35.186 cpu : usr=98.82%, sys=0.74%, ctx=180, majf=0, minf=32 00:37:35.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:35.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699302: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10013msec) 00:37:35.187 slat (nsec): min=5843, max=73070, avg=15077.42, stdev=10381.81 00:37:35.187 clat (usec): min=17272, max=56691, avg=31870.67, stdev=1338.47 00:37:35.187 lat (usec): min=17278, max=56739, avg=31885.75, stdev=1339.02 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[27919], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.187 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:35.187 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 
95.00th=[32637], 00:37:35.187 | 99.00th=[33817], 99.50th=[34866], 99.90th=[44303], 99.95th=[44303], 00:37:35.187 | 99.99th=[56886] 00:37:35.187 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.80, stdev=64.34, samples=20 00:37:35.187 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:37:35.187 lat (msec) : 20=0.32%, 50=99.64%, 100=0.04% 00:37:35.187 cpu : usr=99.24%, sys=0.47%, ctx=33, majf=0, minf=31 00:37:35.187 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699303: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10006msec) 00:37:35.187 slat (nsec): min=5906, max=89524, avg=29791.81, stdev=17462.56 00:37:35.187 clat (usec): min=13248, max=57271, avg=31676.99, stdev=1628.43 00:37:35.187 lat (usec): min=13256, max=57289, avg=31706.79, stdev=1629.13 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[28443], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:37:35.187 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.187 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.187 | 99.00th=[33424], 99.50th=[33817], 99.90th=[41681], 99.95th=[41681], 00:37:35.187 | 99.99th=[57410] 00:37:35.187 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:37:35.187 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:37:35.187 lat (msec) : 20=0.68%, 50=99.28%, 100=0.04% 00:37:35.187 cpu : usr=98.73%, sys=0.87%, ctx=114, majf=0, minf=22 00:37:35.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699304: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=484, BW=1939KiB/s (1986kB/s)(18.9MiB/10006msec) 00:37:35.187 slat (nsec): min=5801, max=90012, avg=15415.46, stdev=12073.57 00:37:35.187 clat (usec): min=6939, max=65029, avg=32924.87, stdev=5870.26 00:37:35.187 lat (usec): min=6945, max=65048, avg=32940.29, stdev=5868.55 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[16319], 5.00th=[24511], 10.00th=[30540], 20.00th=[31589], 00:37:35.187 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:35.187 | 70.00th=[32375], 80.00th=[32900], 90.00th=[40633], 95.00th=[46400], 00:37:35.187 | 99.00th=[49546], 99.50th=[51643], 99.90th=[64750], 99.95th=[64750], 00:37:35.187 | 99.99th=[65274] 00:37:35.187 bw ( KiB/s): min= 1760, max= 2024, per=4.02%, avg=1926.74, stdev=58.20, samples=19 00:37:35.187 iops : min= 440, max= 506, avg=481.68, stdev=14.55, samples=19 00:37:35.187 lat (msec) : 10=0.06%, 20=3.28%, 50=95.86%, 100=0.80% 00:37:35.187 cpu : usr=99.00%, sys=0.71%, ctx=11, majf=0, minf=27 00:37:35.187 IO depths : 1=0.8%, 2=1.8%, 4=9.3%, 8=73.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=90.8%, 8=6.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=4851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699305: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10009msec) 00:37:35.187 slat (nsec): min=5860, max=79684, avg=21183.04, stdev=14634.71 00:37:35.187 clat (usec): min=22093, max=44558, avg=31891.18, stdev=1143.14 00:37:35.187 lat (usec): min=22100, max=44579, avg=31912.36, stdev=1142.42 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:37:35.187 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:37:35.187 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:37:35.187 | 99.00th=[34341], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:37:35.187 | 99.99th=[44303] 00:37:35.187 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1990.40, stdev=65.33, samples=20 00:37:35.187 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:37:35.187 lat (msec) : 50=100.00% 00:37:35.187 cpu : usr=98.99%, sys=0.71%, ctx=11, majf=0, minf=19 00:37:35.187 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699306: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10001msec) 00:37:35.187 slat (nsec): min=5819, max=92723, avg=25341.84, stdev=17034.98 00:37:35.187 clat (usec): min=16971, max=59025, avg=32020.50, stdev=3506.24 00:37:35.187 lat (usec): min=16979, max=59047, avg=32045.84, stdev=3505.37 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[20579], 5.00th=[28443], 10.00th=[31065], 20.00th=[31327], 00:37:35.187 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:37:35.187 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32900], 95.00th=[36439], 00:37:35.187 | 99.00th=[48497], 99.50th=[49546], 99.90th=[58983], 99.95th=[58983], 00:37:35.187 | 99.99th=[58983] 00:37:35.187 bw ( KiB/s): min= 1840, max= 2096, per=4.13%, avg=1975.16, stdev=71.80, samples=19 00:37:35.187 iops : min= 460, max= 524, avg=493.79, stdev=17.95, samples=19 00:37:35.187 lat (msec) : 20=0.83%, 50=98.75%, 100=0.42% 00:37:35.187 cpu : usr=96.85%, sys=1.80%, ctx=107, majf=0, minf=38 00:37:35.187 IO depths : 1=4.4%, 2=8.8%, 4=19.7%, 8=58.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=4963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 filename2: (groupid=0, jobs=1): err= 0: pid=699307: Sat Jun 8 01:02:51 2024 00:37:35.187 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10008msec) 00:37:35.187 slat (nsec): min=5810, max=92862, avg=21365.67, stdev=16233.31 00:37:35.187 clat (usec): min=11068, max=66103, avg=33405.38, stdev=5377.97 00:37:35.187 lat (usec): 
min=11074, max=66120, avg=33426.75, stdev=5375.93 00:37:35.187 clat percentiles (usec): 00:37:35.187 | 1.00th=[18220], 5.00th=[27395], 10.00th=[31065], 20.00th=[31589], 00:37:35.187 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:35.187 | 70.00th=[32375], 80.00th=[33817], 90.00th=[42730], 95.00th=[45876], 00:37:35.187 | 99.00th=[49546], 99.50th=[51119], 99.90th=[54264], 99.95th=[54264], 00:37:35.187 | 99.99th=[66323] 00:37:35.187 bw ( KiB/s): min= 1520, max= 2048, per=3.98%, avg=1903.60, stdev=119.55, samples=20 00:37:35.187 iops : min= 380, max= 512, avg=475.90, stdev=29.89, samples=20 00:37:35.187 lat (msec) : 20=1.80%, 50=97.30%, 100=0.90% 00:37:35.187 cpu : usr=99.12%, sys=0.60%, ctx=11, majf=0, minf=45 00:37:35.187 IO depths : 1=0.1%, 2=2.0%, 4=12.6%, 8=70.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:37:35.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.187 issued rwts: total=4775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:35.187 00:37:35.187 Run status group 0 (all jobs): 00:37:35.187 READ: bw=46.7MiB/s (49.0MB/s), 1908KiB/s-2065KiB/s (1954kB/s-2115kB/s), io=469MiB (492MB), run=10001-10030msec 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 
-- # xtrace_disable 00:37:35.187 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 bdev_null0 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 [2024-06-08 01:02:51.814312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 bdev_null1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
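
The trace that follows is the harness generating fio's SPDK target configuration on the fly: gen_nvmf_target_json in nvmf/common.sh emits one bdev_nvme_attach_controller fragment per subsystem from a heredoc, comma-joins the fragments via IFS, and runs the result through jq before handing it to fio on /dev/fd/62. Below is a condensed sketch of that pattern, with the fragment fields copied from the trace; the real helper wraps the fragments in a fuller bdev-subsystem document than the simplified one shown here, and $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT come from the test environment (tcp, 10.0.0.2 and 4420 in this run).

```bash
# Condensed reconstruction of the gen_nvmf_target_json pattern traced below;
# fragment fields are copied from the log, the outer wrapper is simplified.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per subsystem id; hdgst/ddgst default
        # to false unless the test exported them (the digest test sets true).
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments (the 'IFS=,' + printf records in the trace) and
    # let 'jq .' validate and pretty-print the final document.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
  "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON
}
```

In the printf record further down, the variables have already been expanded, which is why the logged JSON names Nvme0 and Nvme1 with concrete tcp/10.0.0.2/4420 values.
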
00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.188 { 00:37:35.188 "params": { 00:37:35.188 "name": "Nvme$subsystem", 00:37:35.188 "trtype": "$TEST_TRANSPORT", 00:37:35.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.188 "adrfam": "ipv4", 00:37:35.188 "trsvcid": "$NVMF_PORT", 00:37:35.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.188 "hdgst": ${hdgst:-false}, 00:37:35.188 "ddgst": ${ddgst:-false} 00:37:35.188 }, 00:37:35.188 "method": "bdev_nvme_attach_controller" 00:37:35.188 } 00:37:35.188 EOF 00:37:35.188 )") 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.188 { 00:37:35.188 "params": { 00:37:35.188 "name": "Nvme$subsystem", 00:37:35.188 "trtype": "$TEST_TRANSPORT", 00:37:35.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.188 "adrfam": "ipv4", 00:37:35.188 "trsvcid": "$NVMF_PORT", 00:37:35.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.188 "hdgst": ${hdgst:-false}, 00:37:35.188 "ddgst": ${ddgst:-false} 00:37:35.188 }, 00:37:35.188 "method": "bdev_nvme_attach_controller" 
00:37:35.188 } 00:37:35.188 EOF 00:37:35.188 )") 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:35.188 01:02:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:35.188 "params": { 00:37:35.188 "name": "Nvme0", 00:37:35.188 "trtype": "tcp", 00:37:35.188 "traddr": "10.0.0.2", 00:37:35.188 "adrfam": "ipv4", 00:37:35.188 "trsvcid": "4420", 00:37:35.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.188 "hdgst": false, 00:37:35.188 "ddgst": false 00:37:35.188 }, 00:37:35.188 "method": "bdev_nvme_attach_controller" 00:37:35.188 },{ 00:37:35.188 "params": { 00:37:35.188 "name": "Nvme1", 00:37:35.188 "trtype": "tcp", 00:37:35.188 "traddr": "10.0.0.2", 00:37:35.188 "adrfam": "ipv4", 00:37:35.188 "trsvcid": "4420", 00:37:35.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.188 "hdgst": false, 00:37:35.188 "ddgst": false 00:37:35.188 }, 00:37:35.188 "method": "bdev_nvme_attach_controller" 00:37:35.189 }' 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:35.189 01:02:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.189 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:35.189 ... 00:37:35.189 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:35.189 ... 
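
The job banner above comes from fio being run through the SPDK bdev plugin: LD_PRELOAD points at build/fio/spdk_bdev, the JSON produced by gen_nvmf_target_json arrives on /dev/fd/62, and the job file written by gen_fio_conf arrives on /dev/fd/61. The following is a minimal way to reproduce that launch by hand, assuming a job file matching the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 parameters set at target/dif.sh@115; the /tmp path, the exact global options, and the Nvme0n1/Nvme1n1 bdev names (namespace 1 of the attached Nvme0/Nvme1 controllers) are illustrative, since the harness streams the job file over a descriptor instead.

```bash
# Job file approximating what gen_fio_conf emits for this run: randread with
# per-direction block sizes (R=8k, W=16k, T=128k, matching the banner) and two
# jobs per file section, hence fio's "Starting 4 threads".
cat > /tmp/dif.fio <<'JOB'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
JOB

# Launch as traced in the log: the SPDK ioengine is preloaded and fio resolves
# the filenames against the bdevs attached through the JSON config.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) /tmp/dif.fio
```
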
00:37:35.189 fio-3.35 00:37:35.189 Starting 4 threads 00:37:35.189 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.493 00:37:40.493 filename0: (groupid=0, jobs=1): err= 0: pid=701670: Sat Jun 8 01:02:58 2024 00:37:40.493 read: IOPS=2057, BW=16.1MiB/s (16.9MB/s)(80.4MiB/5001msec) 00:37:40.493 slat (nsec): min=5647, max=31726, avg=6206.42, stdev=1576.26 00:37:40.493 clat (usec): min=1719, max=6822, avg=3870.93, stdev=640.83 00:37:40.493 lat (usec): min=1726, max=6828, avg=3877.14, stdev=640.74 00:37:40.493 clat percentiles (usec): 00:37:40.493 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3359], 00:37:40.494 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:37:40.494 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5080], 00:37:40.494 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6456], 00:37:40.494 | 99.99th=[ 6587] 00:37:40.494 bw ( KiB/s): min=15920, max=16784, per=24.83%, avg=16444.44, stdev=272.56, samples=9 00:37:40.494 iops : min= 1990, max= 2098, avg=2055.56, stdev=34.07, samples=9 00:37:40.494 lat (msec) : 2=0.08%, 4=63.56%, 10=36.36% 00:37:40.494 cpu : usr=97.26%, sys=2.48%, ctx=7, majf=0, minf=0 00:37:40.494 IO depths : 1=0.4%, 2=1.8%, 4=70.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 issued rwts: total=10289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:40.494 filename0: (groupid=0, jobs=1): err= 0: pid=701671: Sat Jun 8 01:02:58 2024 00:37:40.494 read: IOPS=1954, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:37:40.494 slat (nsec): min=5637, max=32192, avg=6066.75, stdev=1086.33 00:37:40.494 clat (usec): min=1695, max=49940, avg=4077.03, stdev=1481.37 00:37:40.494 lat (usec): min=1701, max=49972, avg=4083.10, stdev=1481.61 00:37:40.494 clat percentiles (usec): 00:37:40.494 | 1.00th=[ 2737], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3490], 00:37:40.494 | 30.00th=[ 3654], 40.00th=[ 3785], 50.00th=[ 3949], 60.00th=[ 4080], 00:37:40.494 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5276], 00:37:40.494 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 7767], 99.95th=[50070], 00:37:40.494 | 99.99th=[50070] 00:37:40.494 bw ( KiB/s): min=14128, max=16096, per=23.61%, avg=15633.60, stdev=557.96, samples=10 00:37:40.494 iops : min= 1766, max= 2012, avg=1954.20, stdev=69.75, samples=10 00:37:40.494 lat (msec) : 2=0.03%, 4=54.06%, 10=45.83%, 50=0.08% 00:37:40.494 cpu : usr=96.90%, sys=2.76%, ctx=67, majf=0, minf=9 00:37:40.494 IO depths : 1=0.3%, 2=1.7%, 4=70.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 issued rwts: total=9774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:40.494 filename1: (groupid=0, jobs=1): err= 0: pid=701672: Sat Jun 8 01:02:58 2024 00:37:40.494 read: IOPS=2101, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5002msec) 00:37:40.494 slat (nsec): min=5636, max=38758, avg=6325.12, stdev=1912.88 00:37:40.494 clat (usec): min=1917, max=6673, avg=3788.51, stdev=599.85 00:37:40.494 lat (usec): min=1923, max=6679, avg=3794.83, stdev=599.77 00:37:40.494 clat percentiles (usec): 00:37:40.494 | 1.00th=[ 2540], 5.00th=[ 
2900], 10.00th=[ 3130], 20.00th=[ 3326], 00:37:40.494 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3785], 00:37:40.494 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 4883], 00:37:40.494 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6325], 00:37:40.494 | 99.99th=[ 6652] 00:37:40.494 bw ( KiB/s): min=16448, max=17264, per=25.39%, avg=16814.40, stdev=235.93, samples=10 00:37:40.494 iops : min= 2056, max= 2158, avg=2101.80, stdev=29.49, samples=10 00:37:40.494 lat (msec) : 2=0.02%, 4=69.95%, 10=30.03% 00:37:40.494 cpu : usr=97.14%, sys=2.62%, ctx=6, majf=0, minf=1 00:37:40.494 IO depths : 1=0.4%, 2=1.6%, 4=70.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 issued rwts: total=10512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:40.494 filename1: (groupid=0, jobs=1): err= 0: pid=701673: Sat Jun 8 01:02:58 2024 00:37:40.494 read: IOPS=2213, BW=17.3MiB/s (18.1MB/s)(87.2MiB/5042msec) 00:37:40.494 slat (nsec): min=8231, max=42011, avg=8912.65, stdev=1570.71 00:37:40.494 clat (usec): min=1166, max=41615, avg=3571.35, stdev=938.26 00:37:40.494 lat (usec): min=1175, max=41624, avg=3580.26, stdev=938.18 00:37:40.494 clat percentiles (usec): 00:37:40.494 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2835], 20.00th=[ 3064], 00:37:40.494 | 30.00th=[ 3228], 40.00th=[ 3359], 50.00th=[ 3458], 60.00th=[ 3621], 00:37:40.494 | 70.00th=[ 3720], 80.00th=[ 3818], 90.00th=[ 4555], 95.00th=[ 5080], 00:37:40.494 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6325], 00:37:40.494 | 99.99th=[41681] 00:37:40.494 bw ( KiB/s): min=17600, max=18016, per=26.96%, avg=17852.80, stdev=114.59, samples=10 00:37:40.494 iops : min= 2200, max= 2252, avg=2231.60, stdev=14.32, samples=10 00:37:40.494 lat (msec) : 2=0.52%, 4=83.09%, 10=16.36%, 50=0.03% 00:37:40.494 cpu : usr=97.74%, sys=2.00%, ctx=7, majf=0, minf=9 00:37:40.494 IO depths : 1=0.1%, 2=0.4%, 4=72.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.494 issued rwts: total=11161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.494 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:40.494 00:37:40.494 Run status group 0 (all jobs): 00:37:40.494 READ: bw=64.7MiB/s (67.8MB/s), 15.3MiB/s-17.3MiB/s (16.0MB/s-18.1MB/s), io=326MiB (342MB), run=5001-5042msec 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 00:37:40.494 real 0m24.487s 00:37:40.494 user 5m14.853s 00:37:40.494 sys 0m4.399s 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 ************************************ 00:37:40.494 END TEST fio_dif_rand_params 00:37:40.494 ************************************ 00:37:40.494 01:02:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:40.494 01:02:58 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:40.494 01:02:58 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 ************************************ 00:37:40.494 START TEST fio_dif_digest 00:37:40.494 ************************************ 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 bdev_null0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.494 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.495 [2024-06-08 01:02:58.518367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:40.495 { 00:37:40.495 "params": { 00:37:40.495 "name": "Nvme$subsystem", 00:37:40.495 "trtype": "$TEST_TRANSPORT", 00:37:40.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:40.495 "adrfam": "ipv4", 00:37:40.495 "trsvcid": "$NVMF_PORT", 00:37:40.495 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:40.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:40.495 "hdgst": ${hdgst:-false}, 00:37:40.495 "ddgst": ${ddgst:-false} 00:37:40.495 }, 00:37:40.495 "method": "bdev_nvme_attach_controller" 00:37:40.495 } 00:37:40.495 EOF 00:37:40.495 )") 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=,
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:37:40.495 "params": {
00:37:40.495 "name": "Nvme0",
00:37:40.495 "trtype": "tcp",
00:37:40.495 "traddr": "10.0.0.2",
00:37:40.495 "adrfam": "ipv4",
00:37:40.495 "trsvcid": "4420",
00:37:40.495 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:40.495 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:40.495 "hdgst": true,
00:37:40.495 "ddgst": true
00:37:40.495 },
00:37:40.495 "method": "bdev_nvme_attach_controller"
00:37:40.495 }'
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib=
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]]
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}"
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}'
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib=
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]]
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:40.495 01:02:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:40.754 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:37:40.754 ...
00:37:40.754 fio-3.35 00:37:40.754 Starting 3 threads 00:37:40.754 EAL: No free 2048 kB hugepages reported on node 1 00:37:52.984 00:37:52.984 filename0: (groupid=0, jobs=1): err= 0: pid=703005: Sat Jun 8 01:03:09 2024 00:37:52.984 read: IOPS=141, BW=17.7MiB/s (18.5MB/s)(177MiB/10017msec) 00:37:52.984 slat (nsec): min=6040, max=45581, avg=6671.41, stdev=1260.64 00:37:52.984 clat (usec): min=7860, max=96752, avg=21211.30, stdev=17510.20 00:37:52.984 lat (usec): min=7867, max=96759, avg=21217.97, stdev=17510.22 00:37:52.984 clat percentiles (usec): 00:37:52.984 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11469], 00:37:52.984 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13960], 60.00th=[14484], 00:37:52.984 | 70.00th=[15139], 80.00th=[16909], 90.00th=[52691], 95.00th=[54264], 00:37:52.984 | 99.00th=[92799], 99.50th=[94897], 99.90th=[95945], 99.95th=[96994], 00:37:52.984 | 99.99th=[96994] 00:37:52.984 bw ( KiB/s): min=13824, max=22528, per=31.57%, avg=18086.40, stdev=2436.11, samples=20 00:37:52.984 iops : min= 108, max= 176, avg=141.30, stdev=19.03, samples=20 00:37:52.984 lat (msec) : 10=6.43%, 20=74.29%, 50=0.21%, 100=19.07% 00:37:52.984 cpu : usr=96.66%, sys=3.12%, ctx=10, majf=0, minf=99 00:37:52.984 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.984 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:52.984 filename0: (groupid=0, jobs=1): err= 0: pid=703006: Sat Jun 8 01:03:09 2024 00:37:52.984 read: IOPS=151, BW=19.0MiB/s (19.9MB/s)(190MiB/10011msec) 00:37:52.984 slat (nsec): min=6032, max=32090, avg=6650.79, stdev=1002.86 00:37:52.984 clat (usec): min=7470, max=96777, avg=19748.28, stdev=16591.53 00:37:52.984 lat (usec): min=7477, max=96783, avg=19754.93, stdev=16591.55 00:37:52.984 clat percentiles (usec): 00:37:52.984 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10945], 00:37:52.984 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13829], 60.00th=[14484], 00:37:52.984 | 70.00th=[15139], 80.00th=[16188], 90.00th=[52691], 95.00th=[54264], 00:37:52.984 | 99.00th=[92799], 99.50th=[93848], 99.90th=[93848], 99.95th=[96994], 00:37:52.984 | 99.99th=[96994] 00:37:52.984 bw ( KiB/s): min=13851, max=27904, per=33.90%, avg=19418.95, stdev=4011.74, samples=20 00:37:52.984 iops : min= 108, max= 218, avg=151.70, stdev=31.36, samples=20 00:37:52.984 lat (msec) : 10=11.18%, 20=72.76%, 50=0.26%, 100=15.79% 00:37:52.984 cpu : usr=96.48%, sys=3.30%, ctx=16, majf=0, minf=98 00:37:52.984 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.984 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:52.984 filename0: (groupid=0, jobs=1): err= 0: pid=703007: Sat Jun 8 01:03:09 2024 00:37:52.984 read: IOPS=155, BW=19.4MiB/s (20.4MB/s)(195MiB/10048msec) 00:37:52.984 slat (nsec): min=6028, max=32578, avg=6665.65, stdev=1191.66 00:37:52.984 clat (msec): min=7, max=132, avg=19.27, stdev=16.23 00:37:52.984 lat (msec): min=7, max=132, avg=19.28, stdev=16.23 00:37:52.984 clat percentiles (msec): 00:37:52.984 | 1.00th=[ 8], 
5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:37:52.984 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:37:52.984 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 53], 95.00th=[ 54], 00:37:52.984 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 96], 99.95th=[ 132], 00:37:52.984 | 99.99th=[ 132] 00:37:52.984 bw ( KiB/s): min=11008, max=26368, per=34.83%, avg=19955.20, stdev=3943.67, samples=20 00:37:52.984 iops : min= 86, max= 206, avg=155.90, stdev=30.81, samples=20 00:37:52.984 lat (msec) : 10=10.38%, 20=74.18%, 50=0.58%, 100=14.80%, 250=0.06% 00:37:52.984 cpu : usr=96.35%, sys=3.42%, ctx=10, majf=0, minf=193 00:37:52.984 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.984 issued rwts: total=1561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:52.985 00:37:52.985 Run status group 0 (all jobs): 00:37:52.985 READ: bw=55.9MiB/s (58.7MB/s), 17.7MiB/s-19.4MiB/s (18.5MB/s-20.4MB/s), io=562MiB (589MB), run=10011-10048msec 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.985 00:37:52.985 real 0m11.150s 00:37:52.985 user 0m45.572s 00:37:52.985 sys 0m1.261s 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:52.985 01:03:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:52.985 ************************************ 00:37:52.985 END TEST fio_dif_digest 00:37:52.985 ************************************ 00:37:52.985 01:03:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:52.985 01:03:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:52.985 rmmod nvme_tcp 00:37:52.985 rmmod nvme_fabrics 00:37:52.985 rmmod nvme_keyring 
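With the digest test done, nvmftestfini tears the stack back down: the nvme-tcp kernel module stack is unloaded (the rmmod lines above) and the nvmf_tgt app is killed (pid 692846, below). Condensed into a sketch, with the harness's retry loop and error handling elided:

# Hedged sketch of the teardown sequence; pid and module names are from
# this run, the wait-for-exit handling is simplified.
sync
modprobe -v -r nvme-tcp       # also drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 692846                   # nvmf_tgt reactor process (reactor_0)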
00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 692846 ']' 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 692846 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 692846 ']' 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 692846 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 692846 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 692846' 00:37:52.985 killing process with pid 692846 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@968 -- # kill 692846 00:37:52.985 01:03:09 nvmf_dif -- common/autotest_common.sh@973 -- # wait 692846 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:52.985 01:03:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:54.895 Waiting for block devices as requested 00:37:54.895 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:55.156 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:55.156 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:55.156 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:55.416 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:55.416 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:55.416 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:55.677 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:55.677 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:55.677 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:55.937 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:55.937 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:55.937 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:56.197 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:56.197 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:56.197 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:56.197 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:56.458 01:03:14 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:56.458 01:03:14 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:56.458 01:03:14 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:56.458 01:03:14 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:56.458 01:03:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.458 01:03:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:56.458 01:03:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.000 01:03:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:59.000 00:37:59.000 real 1m16.272s 00:37:59.000 user 8m3.371s 00:37:59.000 sys 0m18.833s 00:37:59.000 01:03:16 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:59.000 01:03:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:59.000 
************************************ 00:37:59.000 END TEST nvmf_dif 00:37:59.000 ************************************ 00:37:59.000 01:03:16 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:59.000 01:03:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:59.000 01:03:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:59.000 01:03:16 -- common/autotest_common.sh@10 -- # set +x 00:37:59.000 ************************************ 00:37:59.000 START TEST nvmf_abort_qd_sizes 00:37:59.000 ************************************ 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:59.000 * Looking for test storage... 00:37:59.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.000 01:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.000 01:03:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:37:59.000 01:03:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:05.639 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:05.640 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:05.640 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:05.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:05.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
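The device discovery above is a sysfs walk: each whitelisted e810 PCI function is mapped to its bound net device by listing /sys/bus/pci/devices/<bdf>/net. A minimal sketch using the two BDFs reported in this run:

# Hedged sketch of the PCI-to-netdev mapping; BDFs are the ones found above.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue
        echo "Found net devices under $pci: ${path##*/}"
    done
done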
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:38:05.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:05.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms
00:38:05.640
00:38:05.640 --- 10.0.0.2 ping statistics ---
00:38:05.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:05.640 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms
00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:05.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:05.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:38:05.640 00:38:05.640 --- 10.0.0.1 ping statistics --- 00:38:05.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.640 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:05.640 01:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:08.943 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:08.943 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:09.203 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:09.203 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:09.203 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=712922 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 712922 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 712922 ']' 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:09.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:09.463 01:03:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:09.463 [2024-06-08 01:03:27.693223] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:38:09.463 [2024-06-08 01:03:27.693269] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.463 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.724 [2024-06-08 01:03:27.756656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:09.724 [2024-06-08 01:03:27.822413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.724 [2024-06-08 01:03:27.822447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.724 [2024-06-08 01:03:27.822455] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.724 [2024-06-08 01:03:27.822461] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.724 [2024-06-08 01:03:27.822466] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.724 [2024-06-08 01:03:27.822532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.724 [2024-06-08 01:03:27.822645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:09.724 [2024-06-08 01:03:27.822800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.724 [2024-06-08 01:03:27.822802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:10.295 01:03:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:10.295 ************************************ 00:38:10.295 START TEST spdk_target_abort 00:38:10.295 ************************************ 00:38:10.295 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:38:10.295 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:10.295 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:10.295 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.295 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.867 spdk_targetn1 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.867 [2024-06-08 01:03:28.877503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.867 [2024-06-08 01:03:28.917778] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:10.867 01:03:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:10.867 EAL: No free 2048 kB hugepages reported on node 1 
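rabort above assembles the -r transport ID one key:value field at a time and then sweeps SPDK's abort example over the queue depths in qds=(4 24 64). Roughly:

# Hedged sketch of the sweep; SPDK_DIR is an assumption, the transport
# fields, I/O pattern and queue depths are the ones used in this run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target=
for r in trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 \
         subnqn:nqn.2016-06.io.spdk:testnqn; do
    target="${target:+$target }$r"
done
for qd in 4 24 64; do
    "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done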
00:38:10.867 [2024-06-08 01:03:29.081842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:248 len:8 PRP1 0x2000078be000 PRP2 0x0 00:38:10.867 [2024-06-08 01:03:29.081868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0021 p:1 m:0 dnr:0 00:38:10.867 [2024-06-08 01:03:29.144886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2272 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:38:10.867 [2024-06-08 01:03:29.144906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:11.127 [2024-06-08 01:03:29.163475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2944 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:38:11.127 [2024-06-08 01:03:29.163492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:11.127 [2024-06-08 01:03:29.199923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3952 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:38:11.127 [2024-06-08 01:03:29.199941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ef p:0 m:0 dnr:0 00:38:14.430 Initializing NVMe Controllers 00:38:14.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:14.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:14.430 Initialization complete. Launching workers. 00:38:14.430 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11219, failed: 4 00:38:14.430 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3130, failed to submit 8093 00:38:14.430 success 773, unsuccess 2357, failed 0 00:38:14.430 01:03:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:14.430 01:03:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:14.430 EAL: No free 2048 kB hugepages reported on node 1 00:38:14.430 [2024-06-08 01:03:32.278669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:888 len:8 PRP1 0x200007c52000 PRP2 0x0 00:38:14.430 [2024-06-08 01:03:32.278709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:38:14.430 [2024-06-08 01:03:32.293554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:1192 len:8 PRP1 0x200007c54000 PRP2 0x0 00:38:14.430 [2024-06-08 01:03:32.293578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:38:14.430 [2024-06-08 01:03:32.309513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1584 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:38:14.430 [2024-06-08 01:03:32.309537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:38:14.430 [2024-06-08 01:03:32.317488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 
cid:189 nsid:1 lba:1728 len:8 PRP1 0x200007c40000 PRP2 0x0 00:38:14.430 [2024-06-08 01:03:32.317510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:38:14.430 [2024-06-08 01:03:32.372441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:3056 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:38:14.430 [2024-06-08 01:03:32.372465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0080 p:0 m:0 dnr:0 00:38:17.730 Initializing NVMe Controllers 00:38:17.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:17.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:17.730 Initialization complete. Launching workers. 00:38:17.730 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8589, failed: 5 00:38:17.730 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7375 00:38:17.730 success 382, unsuccess 837, failed 0 00:38:17.730 01:03:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.730 01:03:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.730 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.115 [2024-06-08 01:03:37.107840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:168344 len:8 PRP1 0x200007926000 PRP2 0x0 00:38:19.115 [2024-06-08 01:03:37.107875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:187 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:38:20.501 Initializing NVMe Controllers 00:38:20.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:20.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:20.501 Initialization complete. Launching workers. 
00:38:20.501 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43073, failed: 1 00:38:20.501 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2337, failed to submit 40737 00:38:20.501 success 588, unsuccess 1749, failed 0 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.501 01:03:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 712922 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 712922 ']' 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 712922 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 712922 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 712922' 00:38:22.416 killing process with pid 712922 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 712922 00:38:22.416 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 712922 00:38:22.677 00:38:22.677 real 0m12.144s 00:38:22.677 user 0m49.246s 00:38:22.677 sys 0m1.996s 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:22.677 ************************************ 00:38:22.677 END TEST spdk_target_abort 00:38:22.677 ************************************ 00:38:22.677 01:03:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:22.677 01:03:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:22.677 01:03:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:22.677 01:03:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:22.677 ************************************ 00:38:22.677 START TEST kernel_target_abort 00:38:22.677 
************************************ 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:22.677 01:03:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:25.978 Waiting for block devices as requested 00:38:25.978 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:25.978 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:25.978 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:25.978 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:25.978 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:26.239 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:26.239 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:26.239 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:26.503 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:26.503 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:26.503 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:26.802 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:26.802 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:26.802 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:26.802 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:27.063 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:27.063 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:27.324 No valid GPT data, bailing 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:27.324 01:03:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:27.324 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:27.585 00:38:27.585 Discovery Log Number of Records 2, Generation counter 2 00:38:27.585 =====Discovery Log Entry 0====== 00:38:27.585 trtype: tcp 00:38:27.585 adrfam: ipv4 00:38:27.585 subtype: current discovery subsystem 00:38:27.585 treq: not specified, sq flow control disable supported 00:38:27.585 portid: 1 00:38:27.585 trsvcid: 4420 00:38:27.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:27.585 traddr: 10.0.0.1 00:38:27.585 eflags: none 00:38:27.585 sectype: none 00:38:27.585 =====Discovery Log Entry 1====== 00:38:27.585 trtype: tcp 00:38:27.585 adrfam: ipv4 00:38:27.585 subtype: nvme subsystem 00:38:27.585 treq: not specified, sq flow control disable supported 00:38:27.585 portid: 1 00:38:27.585 trsvcid: 4420 00:38:27.585 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:27.585 traddr: 10.0.0.1 00:38:27.585 eflags: none 00:38:27.585 sectype: none 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.585 01:03:45 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:27.585 01:03:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.585 EAL: No free 2048 kB hugepages reported on node 1 00:38:30.885 Initializing NVMe Controllers 00:38:30.885 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:30.885 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:30.885 Initialization complete. Launching workers. 00:38:30.885 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50882, failed: 0 00:38:30.885 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50882, failed to submit 0 00:38:30.885 success 0, unsuccess 50882, failed 0 00:38:30.885 01:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:30.885 01:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:30.885 EAL: No free 2048 kB hugepages reported on node 1 00:38:34.184 Initializing NVMe Controllers 00:38:34.184 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:34.184 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:34.184 Initialization complete. Launching workers. 
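configure_kernel_target above builds the in-kernel nvmet target entirely through configfs; the mkdir/echo/ln -s sequence maps onto the kernel's nvmet attribute files. A condensed sketch of the same steps, using the device and address from this log (the attribute names are the upstream nvmet configfs ABI; the model-string write at common.sh@665 targets attr_model and is omitted here):

  modprobe nvmet nvmet_tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                     # publish the subsystem on the port

After this, the discovery output above is what `nvme discover -t tcp -a 10.0.0.1 -s 4420` should return: the discovery subsystem itself plus one entry for nqn.2016-06.io.spdk:testnqn.
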
00:38:34.184 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90696, failed: 0 00:38:34.184 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22850, failed to submit 67846 00:38:34.184 success 0, unsuccess 22850, failed 0 00:38:34.184 01:03:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:34.184 01:03:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:34.184 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.727 Initializing NVMe Controllers 00:38:36.727 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:36.727 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:36.727 Initialization complete. Launching workers. 00:38:36.727 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87631, failed: 0 00:38:36.727 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21906, failed to submit 65725 00:38:36.727 success 0, unsuccess 21906, failed 0 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:36.727 01:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:40.028 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:40.028 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:40.028 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:40.028 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:38:40.289 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:40.289 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:42.204 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:42.465 00:38:42.465 real 0m19.746s 00:38:42.465 user 0m8.119s 00:38:42.465 sys 0m6.220s 00:38:42.465 01:04:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:42.465 01:04:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:42.465 ************************************ 00:38:42.465 END TEST kernel_target_abort 00:38:42.465 ************************************ 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:42.465 rmmod nvme_tcp 00:38:42.465 rmmod nvme_fabrics 00:38:42.465 rmmod nvme_keyring 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 712922 ']' 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 712922 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 712922 ']' 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 712922 00:38:42.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (712922) - No such process 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 712922 is not found' 00:38:42.465 Process with pid 712922 is not found 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:42.465 01:04:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:45.770 Waiting for block devices as requested 00:38:45.770 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:45.770 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:46.031 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:46.031 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:46.031 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:46.292 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:46.292 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:46.292 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:46.292 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:46.553 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:46.553 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:46.813 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:46.813 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:46.813 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:47.073 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:47.073 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:38:47.073 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:47.334 01:04:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.909 01:04:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:49.909 00:38:49.909 real 0m50.705s 00:38:49.909 user 1m2.428s 00:38:49.909 sys 0m18.598s 00:38:49.910 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:49.910 01:04:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:49.910 ************************************ 00:38:49.910 END TEST nvmf_abort_qd_sizes 00:38:49.910 ************************************ 00:38:49.910 01:04:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:49.910 01:04:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:49.910 01:04:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:49.910 01:04:07 -- common/autotest_common.sh@10 -- # set +x 00:38:49.910 ************************************ 00:38:49.910 START TEST keyring_file 00:38:49.910 ************************************ 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:49.910 * Looking for test storage... 
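The nvmf_abort_qd_sizes teardown above mirrors that setup in reverse: clean_kernel_target disables the namespace, drops the port-to-subsystem link before any rmdir (configfs refuses to remove a directory that is still referenced), then unloads the target modules, and nvmftestfini unloads the host-side transport. Condensed, reusing $subsys and $port from the setup sketch:

  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet         # target side
  modprobe -r nvme-tcp nvme-fabrics   # host side, matching the rmmod output above
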
00:38:49.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:49.910 01:04:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.910 01:04:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.910 01:04:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.910 01:04:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.910 01:04:07 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.910 01:04:07 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.910 01:04:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:49.910 01:04:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oY4JCnRp5v 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:49.910 01:04:07 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oY4JCnRp5v 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oY4JCnRp5v 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.oY4JCnRp5v 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NRzsDfc2gq 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:49.910 01:04:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NRzsDfc2gq 00:38:49.910 01:04:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NRzsDfc2gq 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NRzsDfc2gq 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@30 -- # tgtpid=722987 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@32 -- # waitforlisten 722987 00:38:49.910 01:04:07 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 722987 ']' 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:49.910 01:04:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:49.910 [2024-06-08 01:04:07.959803] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
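prep_key above stages each TLS secret for the keyring test: mktemp a file, wrap the raw hex key in NVMe/TCP PSK interchange framing, and lock the mode down to 0600 so the keyring will accept it. A sketch of that flow, assuming format_interchange_psk as defined in nvmf/common.sh (digest 0 selects the no-hash interchange variant, i.e. an NVMeTLSkey-1:00:...: string carrying base64 of the key plus its CRC-32):

  key=00112233445566778899aabbccddeeff        # key0 from file.sh
  path=$(mktemp)                              # e.g. /tmp/tmp.oY4JCnRp5v
  format_interchange_psk "$key" 0 > "$path"
  chmod 0600 "$path"                          # looser modes get rejected -- see the 0660 check later in this test
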
00:38:49.910 [2024-06-08 01:04:07.959887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722987 ] 00:38:49.910 EAL: No free 2048 kB hugepages reported on node 1 00:38:49.910 [2024-06-08 01:04:08.025916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.910 [2024-06-08 01:04:08.101256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.481 01:04:08 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:50.481 01:04:08 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:50.481 01:04:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:50.481 01:04:08 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.481 01:04:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:50.481 [2024-06-08 01:04:08.728791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.481 null0 00:38:50.481 [2024-06-08 01:04:08.760841] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:50.481 [2024-06-08 01:04:08.761098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:50.742 [2024-06-08 01:04:08.768851] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.743 01:04:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:50.743 [2024-06-08 01:04:08.784892] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:50.743 request: 00:38:50.743 { 00:38:50.743 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.743 "secure_channel": false, 00:38:50.743 "listen_address": { 00:38:50.743 "trtype": "tcp", 00:38:50.743 "traddr": "127.0.0.1", 00:38:50.743 "trsvcid": "4420" 00:38:50.743 }, 00:38:50.743 "method": "nvmf_subsystem_add_listener", 00:38:50.743 "req_id": 1 00:38:50.743 } 00:38:50.743 Got JSON-RPC error response 00:38:50.743 response: 00:38:50.743 { 00:38:50.743 "code": -32602, 00:38:50.743 "message": "Invalid parameters" 00:38:50.743 } 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:50.743 01:04:08 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:50.743 01:04:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=723216 00:38:50.743 01:04:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 723216 /var/tmp/bperf.sock 00:38:50.743 01:04:08 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 723216 ']' 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:50.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:50.743 01:04:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:50.743 [2024-06-08 01:04:08.839488] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:38:50.743 [2024-06-08 01:04:08.839533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723216 ] 00:38:50.743 EAL: No free 2048 kB hugepages reported on node 1 00:38:50.743 [2024-06-08 01:04:08.912361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.743 [2024-06-08 01:04:08.976498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.314 01:04:09 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:51.314 01:04:09 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:51.314 01:04:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:51.314 01:04:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:51.574 01:04:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NRzsDfc2gq 00:38:51.574 01:04:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NRzsDfc2gq 00:38:51.835 01:04:09 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:51.835 01:04:09 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:51.835 01:04:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.835 01:04:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:51.835 01:04:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.835 01:04:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.oY4JCnRp5v == \/\t\m\p\/\t\m\p\.\o\Y\4\J\C\n\R\p\5\v ]] 00:38:51.835 01:04:10 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:51.835 01:04:10 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:51.835 01:04:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.835 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.835 01:04:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:52.096 01:04:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NRzsDfc2gq == \/\t\m\p\/\t\m\p\.\N\R\z\s\D\f\c\2\g\q ]] 00:38:52.096 01:04:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.096 01:04:10 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:52.096 01:04:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.096 01:04:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.358 01:04:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:52.358 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.358 01:04:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:52.358 01:04:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.358 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.619 [2024-06-08 01:04:10.672782] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:52.619 nvme0n1 00:38:52.619 01:04:10 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:52.619 01:04:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.619 01:04:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.619 01:04:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.620 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.620 01:04:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.880 01:04:10 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:52.880 01:04:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:52.880 01:04:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.880 01:04:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:52.880 01:04:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.880 
01:04:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:52.880 01:04:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.880 01:04:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:52.880 01:04:11 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:53.142 Running I/O for 1 seconds... 00:38:54.083 00:38:54.083 Latency(us) 00:38:54.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.083 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:54.083 nvme0n1 : 1.02 7465.78 29.16 0.00 0.00 17005.11 9502.72 25668.27 00:38:54.083 =================================================================================================================== 00:38:54.083 Total : 7465.78 29.16 0.00 0.00 17005.11 9502.72 25668.27 00:38:54.083 0 00:38:54.083 01:04:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:54.083 01:04:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:54.344 01:04:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:54.344 01:04:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:54.344 01:04:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.344 01:04:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.344 01:04:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.345 01:04:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:54.345 01:04:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:54.345 01:04:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.605 01:04:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:54.605 01:04:12 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:54.605 01:04:12 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:54.605 01:04:12 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:54.605 01:04:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:54.605 [2024-06-08 01:04:12.870641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:54.605 [2024-06-08 01:04:12.871324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ae520 (107): Transport endpoint is not connected 00:38:54.606 [2024-06-08 01:04:12.872321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ae520 (9): Bad file descriptor 00:38:54.606 [2024-06-08 01:04:12.873322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:54.606 [2024-06-08 01:04:12.873330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:54.606 [2024-06-08 01:04:12.873335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:54.606 request: 00:38:54.606 { 00:38:54.606 "name": "nvme0", 00:38:54.606 "trtype": "tcp", 00:38:54.606 "traddr": "127.0.0.1", 00:38:54.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.606 "adrfam": "ipv4", 00:38:54.606 "trsvcid": "4420", 00:38:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.606 "psk": "key1", 00:38:54.606 "method": "bdev_nvme_attach_controller", 00:38:54.606 "req_id": 1 00:38:54.606 } 00:38:54.606 Got JSON-RPC error response 00:38:54.606 response: 00:38:54.606 { 00:38:54.606 "code": -5, 00:38:54.606 "message": "Input/output error" 00:38:54.606 } 00:38:54.606 01:04:12 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:54.606 01:04:12 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:54.606 01:04:12 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:54.606 01:04:12 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:54.866 01:04:12 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:38:54.866 01:04:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:54.866 01:04:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.866 01:04:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.866 01:04:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.866 01:04:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.866 01:04:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:54.866 01:04:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:54.866 01:04:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:54.866 01:04:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.866 01:04:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.866 01:04:13 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:38:54.866 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.127 01:04:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:55.128 01:04:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:55.128 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:55.128 01:04:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:55.128 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:55.388 01:04:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:55.388 01:04:13 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:55.388 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.388 01:04:13 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:55.388 01:04:13 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.oY4JCnRp5v 00:38:55.388 01:04:13 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.389 01:04:13 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:55.389 01:04:13 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.389 01:04:13 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:55.649 01:04:13 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.649 01:04:13 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:55.649 01:04:13 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.649 01:04:13 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.649 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.649 [2024-06-08 01:04:13.810090] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oY4JCnRp5v': 0100660 00:38:55.649 [2024-06-08 01:04:13.810108] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:55.649 request: 00:38:55.649 { 00:38:55.649 "name": "key0", 00:38:55.649 "path": "/tmp/tmp.oY4JCnRp5v", 00:38:55.650 "method": "keyring_file_add_key", 00:38:55.650 "req_id": 1 00:38:55.650 } 00:38:55.650 Got JSON-RPC error response 00:38:55.650 response: 00:38:55.650 { 00:38:55.650 "code": -1, 00:38:55.650 "message": "Operation not permitted" 00:38:55.650 } 00:38:55.650 01:04:13 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:55.650 01:04:13 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:55.650 01:04:13 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:55.650 01:04:13 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:55.650 01:04:13 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.oY4JCnRp5v 00:38:55.650 01:04:13 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.650 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oY4JCnRp5v 00:38:55.910 01:04:13 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.oY4JCnRp5v 00:38:55.910 01:04:13 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:55.910 01:04:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:55.910 01:04:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:55.910 01:04:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:55.910 01:04:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.910 01:04:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:55.910 01:04:14 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:55.910 01:04:14 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.910 01:04:14 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:55.910 01:04:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:56.170 [2024-06-08 01:04:14.287291] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.oY4JCnRp5v': No such file or directory 00:38:56.171 [2024-06-08 01:04:14.287305] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:56.171 [2024-06-08 01:04:14.287321] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:56.171 [2024-06-08 01:04:14.287326] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:56.171 [2024-06-08 01:04:14.287331] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:56.171 request: 00:38:56.171 { 00:38:56.171 "name": "nvme0", 00:38:56.171 "trtype": "tcp", 00:38:56.171 "traddr": "127.0.0.1", 00:38:56.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.171 "adrfam": "ipv4", 00:38:56.171 "trsvcid": "4420", 00:38:56.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.171 "psk": "key0", 00:38:56.171 "method": "bdev_nvme_attach_controller", 
00:38:56.171 "req_id": 1 00:38:56.171 } 00:38:56.171 Got JSON-RPC error response 00:38:56.171 response: 00:38:56.171 { 00:38:56.171 "code": -19, 00:38:56.171 "message": "No such device" 00:38:56.171 } 00:38:56.171 01:04:14 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:38:56.171 01:04:14 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:56.171 01:04:14 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:56.171 01:04:14 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:56.171 01:04:14 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:56.171 01:04:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:56.431 01:04:14 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:56.431 01:04:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OXhewCNadl 00:38:56.431 01:04:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:56.431 01:04:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:56.692 nvme0n1 00:38:56.692 01:04:14 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:56.692 01:04:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:56.692 01:04:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.692 01:04:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.692 01:04:14 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:56.692 01:04:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.953 01:04:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:56.953 01:04:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:56.953 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:56.953 01:04:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:56.953 01:04:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:56.953 01:04:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.953 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.953 01:04:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.214 01:04:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:57.214 01:04:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:57.214 01:04:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:57.214 01:04:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.214 01:04:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.214 01:04:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.214 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.475 01:04:15 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:57.475 01:04:15 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:57.475 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:57.475 01:04:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:38:57.475 01:04:15 keyring_file -- keyring/file.sh@104 -- # jq length 00:38:57.475 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.754 01:04:15 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:38:57.754 01:04:15 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OXhewCNadl 00:38:57.754 01:04:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OXhewCNadl 00:38:58.014 01:04:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NRzsDfc2gq 00:38:58.014 01:04:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NRzsDfc2gq 00:38:58.014 01:04:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.014 01:04:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.274 nvme0n1 00:38:58.274 01:04:16 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:38:58.274 01:04:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:58.534 01:04:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:38:58.534 "subsystems": [ 00:38:58.534 { 00:38:58.534 "subsystem": "keyring", 00:38:58.534 "config": [ 00:38:58.534 { 00:38:58.534 "method": "keyring_file_add_key", 00:38:58.534 "params": { 00:38:58.534 "name": "key0", 00:38:58.534 "path": "/tmp/tmp.OXhewCNadl" 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "keyring_file_add_key", 00:38:58.534 "params": { 00:38:58.534 "name": "key1", 00:38:58.534 "path": "/tmp/tmp.NRzsDfc2gq" 00:38:58.534 } 00:38:58.534 } 00:38:58.534 ] 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "subsystem": "iobuf", 00:38:58.534 "config": [ 00:38:58.534 { 00:38:58.534 "method": "iobuf_set_options", 00:38:58.534 "params": { 00:38:58.534 "small_pool_count": 8192, 00:38:58.534 "large_pool_count": 1024, 00:38:58.534 "small_bufsize": 8192, 00:38:58.534 "large_bufsize": 135168 00:38:58.534 } 00:38:58.534 } 00:38:58.534 ] 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "subsystem": "sock", 00:38:58.534 "config": [ 00:38:58.534 { 00:38:58.534 "method": "sock_set_default_impl", 00:38:58.534 "params": { 00:38:58.534 "impl_name": "posix" 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "sock_impl_set_options", 00:38:58.534 "params": { 00:38:58.534 "impl_name": "ssl", 00:38:58.534 "recv_buf_size": 4096, 00:38:58.534 "send_buf_size": 4096, 00:38:58.534 "enable_recv_pipe": true, 00:38:58.534 "enable_quickack": false, 00:38:58.534 "enable_placement_id": 0, 00:38:58.534 "enable_zerocopy_send_server": true, 00:38:58.534 "enable_zerocopy_send_client": false, 00:38:58.534 "zerocopy_threshold": 0, 00:38:58.534 "tls_version": 0, 00:38:58.534 "enable_ktls": false 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "sock_impl_set_options", 00:38:58.534 "params": { 00:38:58.534 "impl_name": "posix", 00:38:58.534 "recv_buf_size": 2097152, 00:38:58.534 "send_buf_size": 2097152, 00:38:58.534 "enable_recv_pipe": true, 00:38:58.534 "enable_quickack": false, 00:38:58.534 "enable_placement_id": 0, 00:38:58.534 "enable_zerocopy_send_server": true, 00:38:58.534 "enable_zerocopy_send_client": false, 00:38:58.534 "zerocopy_threshold": 0, 00:38:58.534 "tls_version": 0, 00:38:58.534 "enable_ktls": false 00:38:58.534 } 00:38:58.534 } 00:38:58.534 ] 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "subsystem": "vmd", 00:38:58.534 "config": [] 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "subsystem": "accel", 00:38:58.534 "config": [ 00:38:58.534 { 00:38:58.534 "method": "accel_set_options", 00:38:58.534 "params": { 00:38:58.534 "small_cache_size": 128, 00:38:58.534 "large_cache_size": 16, 00:38:58.534 "task_count": 2048, 00:38:58.534 "sequence_count": 2048, 00:38:58.534 "buf_count": 2048 00:38:58.534 } 00:38:58.534 } 00:38:58.534 ] 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "subsystem": "bdev", 00:38:58.534 "config": [ 00:38:58.534 { 00:38:58.534 "method": "bdev_set_options", 00:38:58.534 "params": { 00:38:58.534 "bdev_io_pool_size": 65535, 00:38:58.534 "bdev_io_cache_size": 256, 00:38:58.534 "bdev_auto_examine": true, 00:38:58.534 "iobuf_small_cache_size": 128, 
00:38:58.534 "iobuf_large_cache_size": 16 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "bdev_raid_set_options", 00:38:58.534 "params": { 00:38:58.534 "process_window_size_kb": 1024 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "bdev_iscsi_set_options", 00:38:58.534 "params": { 00:38:58.534 "timeout_sec": 30 00:38:58.534 } 00:38:58.534 }, 00:38:58.534 { 00:38:58.534 "method": "bdev_nvme_set_options", 00:38:58.534 "params": { 00:38:58.534 "action_on_timeout": "none", 00:38:58.534 "timeout_us": 0, 00:38:58.534 "timeout_admin_us": 0, 00:38:58.534 "keep_alive_timeout_ms": 10000, 00:38:58.534 "arbitration_burst": 0, 00:38:58.534 "low_priority_weight": 0, 00:38:58.534 "medium_priority_weight": 0, 00:38:58.534 "high_priority_weight": 0, 00:38:58.534 "nvme_adminq_poll_period_us": 10000, 00:38:58.534 "nvme_ioq_poll_period_us": 0, 00:38:58.534 "io_queue_requests": 512, 00:38:58.534 "delay_cmd_submit": true, 00:38:58.534 "transport_retry_count": 4, 00:38:58.534 "bdev_retry_count": 3, 00:38:58.534 "transport_ack_timeout": 0, 00:38:58.534 "ctrlr_loss_timeout_sec": 0, 00:38:58.534 "reconnect_delay_sec": 0, 00:38:58.534 "fast_io_fail_timeout_sec": 0, 00:38:58.534 "disable_auto_failback": false, 00:38:58.534 "generate_uuids": false, 00:38:58.534 "transport_tos": 0, 00:38:58.534 "nvme_error_stat": false, 00:38:58.534 "rdma_srq_size": 0, 00:38:58.534 "io_path_stat": false, 00:38:58.534 "allow_accel_sequence": false, 00:38:58.534 "rdma_max_cq_size": 0, 00:38:58.534 "rdma_cm_event_timeout_ms": 0, 00:38:58.534 "dhchap_digests": [ 00:38:58.534 "sha256", 00:38:58.534 "sha384", 00:38:58.534 "sha512" 00:38:58.534 ], 00:38:58.534 "dhchap_dhgroups": [ 00:38:58.534 "null", 00:38:58.534 "ffdhe2048", 00:38:58.534 "ffdhe3072", 00:38:58.534 "ffdhe4096", 00:38:58.534 "ffdhe6144", 00:38:58.534 "ffdhe8192" 00:38:58.535 ] 00:38:58.535 } 00:38:58.535 }, 00:38:58.535 { 00:38:58.535 "method": "bdev_nvme_attach_controller", 00:38:58.535 "params": { 00:38:58.535 "name": "nvme0", 00:38:58.535 "trtype": "TCP", 00:38:58.535 "adrfam": "IPv4", 00:38:58.535 "traddr": "127.0.0.1", 00:38:58.535 "trsvcid": "4420", 00:38:58.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.535 "prchk_reftag": false, 00:38:58.535 "prchk_guard": false, 00:38:58.535 "ctrlr_loss_timeout_sec": 0, 00:38:58.535 "reconnect_delay_sec": 0, 00:38:58.535 "fast_io_fail_timeout_sec": 0, 00:38:58.535 "psk": "key0", 00:38:58.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:58.535 "hdgst": false, 00:38:58.535 "ddgst": false 00:38:58.535 } 00:38:58.535 }, 00:38:58.535 { 00:38:58.535 "method": "bdev_nvme_set_hotplug", 00:38:58.535 "params": { 00:38:58.535 "period_us": 100000, 00:38:58.535 "enable": false 00:38:58.535 } 00:38:58.535 }, 00:38:58.535 { 00:38:58.535 "method": "bdev_wait_for_examine" 00:38:58.535 } 00:38:58.535 ] 00:38:58.535 }, 00:38:58.535 { 00:38:58.535 "subsystem": "nbd", 00:38:58.535 "config": [] 00:38:58.535 } 00:38:58.535 ] 00:38:58.535 }' 00:38:58.535 01:04:16 keyring_file -- keyring/file.sh@114 -- # killprocess 723216 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 723216 ']' 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@953 -- # kill -0 723216 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@954 -- # uname 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 723216 00:38:58.535 01:04:16 keyring_file -- 
common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 723216' 00:38:58.535 killing process with pid 723216 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@968 -- # kill 723216 00:38:58.535 Received shutdown signal, test time was about 1.000000 seconds 00:38:58.535 00:38:58.535 Latency(us) 00:38:58.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.535 =================================================================================================================== 00:38:58.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:58.535 01:04:16 keyring_file -- common/autotest_common.sh@973 -- # wait 723216 00:38:58.795 01:04:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=724761 00:38:58.795 01:04:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 724761 /var/tmp/bperf.sock 00:38:58.795 01:04:16 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 724761 ']' 00:38:58.795 01:04:16 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:58.796 01:04:16 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:58.796 01:04:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:58.796 01:04:16 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:58.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
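
The relaunch above hands bdevperf its entire configuration as `-c /dev/fd/63`: the JSON that `save_config` returned is fed back in through a bash process substitution, and the full payload is echoed just below. A minimal Python sketch of the same inherited-fd trick — the bdevperf path and the cut-down config here are illustrative stand-ins; only the fd plumbing is the point:

    import json
    import os
    import subprocess

    # Cut-down stand-in for the full save_config dump echoed below.
    config = {"subsystems": [
        {"subsystem": "keyring", "config": [
            {"method": "keyring_file_add_key",
             "params": {"name": "key0", "path": "/tmp/tmp.OXhewCNadl"}},
        ]},
    ]}

    r, w = os.pipe()                        # child will open /dev/fd/<r>
    with os.fdopen(w, "w") as pipe_in:      # a small config fits in the pipe buffer
        json.dump(config, pipe_in)          # closing w gives the reader its EOF

    proc = subprocess.Popen(
        ["build/examples/bdevperf", "-q", "128", "-o", "4k", "-w", "randrw",
         "-M", "50", "-t", "1", "-m", "2", "-r", "/var/tmp/bperf.sock", "-z",
         "-c", f"/dev/fd/{r}"],
        pass_fds=(r,),                      # keep the read end open across exec
    )
    os.close(r)                             # parent no longer needs the read end
    # ... wait for /var/tmp/bperf.sock, drive RPCs, then proc.terminate()

Feeding the config through a pipe rather than a temp file means the key-bearing JSON is presumably never left on disk, which would explain why the suite replays it this way.
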
00:38:58.796 01:04:16 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:58.796 01:04:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:58.796 01:04:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:38:58.796 "subsystems": [ 00:38:58.796 { 00:38:58.796 "subsystem": "keyring", 00:38:58.796 "config": [ 00:38:58.796 { 00:38:58.796 "method": "keyring_file_add_key", 00:38:58.796 "params": { 00:38:58.796 "name": "key0", 00:38:58.796 "path": "/tmp/tmp.OXhewCNadl" 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "keyring_file_add_key", 00:38:58.796 "params": { 00:38:58.796 "name": "key1", 00:38:58.796 "path": "/tmp/tmp.NRzsDfc2gq" 00:38:58.796 } 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "iobuf", 00:38:58.796 "config": [ 00:38:58.796 { 00:38:58.796 "method": "iobuf_set_options", 00:38:58.796 "params": { 00:38:58.796 "small_pool_count": 8192, 00:38:58.796 "large_pool_count": 1024, 00:38:58.796 "small_bufsize": 8192, 00:38:58.796 "large_bufsize": 135168 00:38:58.796 } 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "sock", 00:38:58.796 "config": [ 00:38:58.796 { 00:38:58.796 "method": "sock_set_default_impl", 00:38:58.796 "params": { 00:38:58.796 "impl_name": "posix" 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "sock_impl_set_options", 00:38:58.796 "params": { 00:38:58.796 "impl_name": "ssl", 00:38:58.796 "recv_buf_size": 4096, 00:38:58.796 "send_buf_size": 4096, 00:38:58.796 "enable_recv_pipe": true, 00:38:58.796 "enable_quickack": false, 00:38:58.796 "enable_placement_id": 0, 00:38:58.796 "enable_zerocopy_send_server": true, 00:38:58.796 "enable_zerocopy_send_client": false, 00:38:58.796 "zerocopy_threshold": 0, 00:38:58.796 "tls_version": 0, 00:38:58.796 "enable_ktls": false 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "sock_impl_set_options", 00:38:58.796 "params": { 00:38:58.796 "impl_name": "posix", 00:38:58.796 "recv_buf_size": 2097152, 00:38:58.796 "send_buf_size": 2097152, 00:38:58.796 "enable_recv_pipe": true, 00:38:58.796 "enable_quickack": false, 00:38:58.796 "enable_placement_id": 0, 00:38:58.796 "enable_zerocopy_send_server": true, 00:38:58.796 "enable_zerocopy_send_client": false, 00:38:58.796 "zerocopy_threshold": 0, 00:38:58.796 "tls_version": 0, 00:38:58.796 "enable_ktls": false 00:38:58.796 } 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "vmd", 00:38:58.796 "config": [] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "accel", 00:38:58.796 "config": [ 00:38:58.796 { 00:38:58.796 "method": "accel_set_options", 00:38:58.796 "params": { 00:38:58.796 "small_cache_size": 128, 00:38:58.796 "large_cache_size": 16, 00:38:58.796 "task_count": 2048, 00:38:58.796 "sequence_count": 2048, 00:38:58.796 "buf_count": 2048 00:38:58.796 } 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "bdev", 00:38:58.796 "config": [ 00:38:58.796 { 00:38:58.796 "method": "bdev_set_options", 00:38:58.796 "params": { 00:38:58.796 "bdev_io_pool_size": 65535, 00:38:58.796 "bdev_io_cache_size": 256, 00:38:58.796 "bdev_auto_examine": true, 00:38:58.796 "iobuf_small_cache_size": 128, 00:38:58.796 "iobuf_large_cache_size": 16 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "bdev_raid_set_options", 00:38:58.796 "params": { 00:38:58.796 "process_window_size_kb": 1024 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 
"method": "bdev_iscsi_set_options", 00:38:58.796 "params": { 00:38:58.796 "timeout_sec": 30 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "bdev_nvme_set_options", 00:38:58.796 "params": { 00:38:58.796 "action_on_timeout": "none", 00:38:58.796 "timeout_us": 0, 00:38:58.796 "timeout_admin_us": 0, 00:38:58.796 "keep_alive_timeout_ms": 10000, 00:38:58.796 "arbitration_burst": 0, 00:38:58.796 "low_priority_weight": 0, 00:38:58.796 "medium_priority_weight": 0, 00:38:58.796 "high_priority_weight": 0, 00:38:58.796 "nvme_adminq_poll_period_us": 10000, 00:38:58.796 "nvme_ioq_poll_period_us": 0, 00:38:58.796 "io_queue_requests": 512, 00:38:58.796 "delay_cmd_submit": true, 00:38:58.796 "transport_retry_count": 4, 00:38:58.796 "bdev_retry_count": 3, 00:38:58.796 "transport_ack_timeout": 0, 00:38:58.796 "ctrlr_loss_timeout_sec": 0, 00:38:58.796 "reconnect_delay_sec": 0, 00:38:58.796 "fast_io_fail_timeout_sec": 0, 00:38:58.796 "disable_auto_failback": false, 00:38:58.796 "generate_uuids": false, 00:38:58.796 "transport_tos": 0, 00:38:58.796 "nvme_error_stat": false, 00:38:58.796 "rdma_srq_size": 0, 00:38:58.796 "io_path_stat": false, 00:38:58.796 "allow_accel_sequence": false, 00:38:58.796 "rdma_max_cq_size": 0, 00:38:58.796 "rdma_cm_event_timeout_ms": 0, 00:38:58.796 "dhchap_digests": [ 00:38:58.796 "sha256", 00:38:58.796 "sha384", 00:38:58.796 "sha512" 00:38:58.796 ], 00:38:58.796 "dhchap_dhgroups": [ 00:38:58.796 "null", 00:38:58.796 "ffdhe2048", 00:38:58.796 "ffdhe3072", 00:38:58.796 "ffdhe4096", 00:38:58.796 "ffdhe6144", 00:38:58.796 "ffdhe8192" 00:38:58.796 ] 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "bdev_nvme_attach_controller", 00:38:58.796 "params": { 00:38:58.796 "name": "nvme0", 00:38:58.796 "trtype": "TCP", 00:38:58.796 "adrfam": "IPv4", 00:38:58.796 "traddr": "127.0.0.1", 00:38:58.796 "trsvcid": "4420", 00:38:58.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.796 "prchk_reftag": false, 00:38:58.796 "prchk_guard": false, 00:38:58.796 "ctrlr_loss_timeout_sec": 0, 00:38:58.796 "reconnect_delay_sec": 0, 00:38:58.796 "fast_io_fail_timeout_sec": 0, 00:38:58.796 "psk": "key0", 00:38:58.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:58.796 "hdgst": false, 00:38:58.796 "ddgst": false 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "bdev_nvme_set_hotplug", 00:38:58.796 "params": { 00:38:58.796 "period_us": 100000, 00:38:58.796 "enable": false 00:38:58.796 } 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "method": "bdev_wait_for_examine" 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }, 00:38:58.796 { 00:38:58.796 "subsystem": "nbd", 00:38:58.796 "config": [] 00:38:58.796 } 00:38:58.796 ] 00:38:58.796 }' 00:38:58.796 [2024-06-08 01:04:16.899394] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:38:58.796 [2024-06-08 01:04:16.899461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724761 ] 00:38:58.796 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.796 [2024-06-08 01:04:16.974009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.796 [2024-06-08 01:04:17.027339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.057 [2024-06-08 01:04:17.169142] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:59.628 01:04:17 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:59.628 01:04:17 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:38:59.628 01:04:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.628 01:04:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:38:59.628 01:04:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:38:59.628 01:04:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.628 01:04:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.888 01:04:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:59.888 01:04:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:38:59.888 01:04:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:59.888 01:04:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.888 01:04:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:59.888 01:04:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.888 01:04:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.888 01:04:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:38:59.888 01:04:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:38:59.888 01:04:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:38:59.888 01:04:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:00.148 01:04:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:00.148 01:04:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:00.148 01:04:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OXhewCNadl /tmp/tmp.NRzsDfc2gq 00:39:00.148 01:04:18 keyring_file -- keyring/file.sh@20 -- # killprocess 724761 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 724761 ']' 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@953 -- # kill -0 724761 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@954 -- # 
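
The `get_refcnt`/`get_key` helpers traced above are just a `keyring_get_keys` RPC piped through two jq filters. The same selection in Python — the relative `scripts/rpc.py` path is shortened from the workspace path in the trace:

    import json
    import subprocess

    def get_refcnt(name: str) -> int:
        # keyring_get_keys returns a JSON array of key objects
        out = subprocess.check_output(
            ["scripts/rpc.py", "-s", "/var/tmp/bperf.sock", "keyring_get_keys"])
        key = next(k for k in json.loads(out) if k["name"] == name)
        # jq equivalent: '.[] | select(.name == "key0")' then '-r .refcnt'
        return key["refcnt"]

A refcnt of 2 after attach is presumably one reference from the registered key plus one from the attached controller's TLS session, which is what the `(( 2 == 2 ))` check that follows asserts.
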
uname 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 724761 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 724761' 00:39:00.148 killing process with pid 724761 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@968 -- # kill 724761 00:39:00.148 Received shutdown signal, test time was about 1.000000 seconds 00:39:00.148 00:39:00.148 Latency(us) 00:39:00.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.148 =================================================================================================================== 00:39:00.148 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:00.148 01:04:18 keyring_file -- common/autotest_common.sh@973 -- # wait 724761 00:39:00.409 01:04:18 keyring_file -- keyring/file.sh@21 -- # killprocess 722987 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 722987 ']' 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@953 -- # kill -0 722987 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@954 -- # uname 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 722987 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 722987' 00:39:00.409 killing process with pid 722987 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@968 -- # kill 722987 00:39:00.409 [2024-06-08 01:04:18.521477] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:00.409 01:04:18 keyring_file -- common/autotest_common.sh@973 -- # wait 722987 00:39:00.670 00:39:00.670 real 0m11.084s 00:39:00.670 user 0m26.010s 00:39:00.670 sys 0m2.515s 00:39:00.670 01:04:18 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:00.670 01:04:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.670 ************************************ 00:39:00.670 END TEST keyring_file 00:39:00.670 ************************************ 00:39:00.670 01:04:18 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:39:00.670 01:04:18 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:00.670 01:04:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:00.670 01:04:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:00.670 01:04:18 -- common/autotest_common.sh@10 -- # set +x 00:39:00.670 ************************************ 00:39:00.670 START TEST keyring_linux 00:39:00.670 ************************************ 00:39:00.670 01:04:18 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:00.670 * Looking for test storage... 
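
The teardown just traced (`killprocess 724761`, then `killprocess 722987`) follows the one pattern used throughout these suites: resolve the pid's comm name, branch on whether it is sudo-wrapped, `kill`, then `wait`. A rough Python equivalent — the /proc polling stands in for the shell's `wait` builtin, and the sudo branch is simplified:

    import os
    import signal
    import subprocess
    import time

    def killprocess(pid: int) -> None:
        name = subprocess.check_output(
            ["ps", "--no-headers", "-o", "comm=", str(pid)], text=True).strip()
        if name != "sudo":                      # sudo-wrapped pids get different handling
            print(f"killing process with pid {pid}")
            os.kill(pid, signal.SIGTERM)        # plain `kill`, as in the trace
        while os.path.exists(f"/proc/{pid}"):   # stand-in for the shell's `wait`
            time.sleep(0.1)
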
00:39:00.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.670 01:04:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.670 01:04:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.670 01:04:18 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.670 01:04:18 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.670 01:04:18 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.670 01:04:18 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.670 01:04:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.670 01:04:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 01:04:18 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 01:04:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:00.671 01:04:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:00.671 01:04:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:00.671 01:04:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:00.671 01:04:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:00.932 01:04:18 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:00.932 /tmp/:spdk-test:key0 00:39:00.932 01:04:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:00.932 01:04:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:00.932 01:04:18 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:00.932 01:04:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:00.932 01:04:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:00.932 /tmp/:spdk-test:key1 00:39:00.932 01:04:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:00.932 01:04:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=725330 00:39:00.932 01:04:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 725330 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 725330 ']' 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:00.932 01:04:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:00.932 [2024-06-08 01:04:19.062496] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
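
Both `prep_key` calls above end in an inline `python -` heredoc inside `format_interchange_psk`. Judging by the `NVMeTLSkey-1:00:...:` strings that surface below, it emits the PSK interchange form: a two-hex-digit hash indicator (00 = no HMAC transform) and base64 over the configured key bytes — here the ASCII hex string itself — with a little-endian CRC-32 trailer. A self-contained sketch of that formatting:

    import base64
    import zlib

    def format_interchange_psk(key: bytes, digest: int = 0) -> str:
        crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # CRC-32 trailer
        b64 = base64.b64encode(key + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

    print(format_interchange_psk(b"00112233445566778899aabbccddeeff"))
    # -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    #    (the key0 payload that keyctl stores below)
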
00:39:00.932 [2024-06-08 01:04:19.062573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725330 ] 00:39:00.932 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.932 [2024-06-08 01:04:19.129355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.932 [2024-06-08 01:04:19.204451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.874 01:04:19 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:01.875 [2024-06-08 01:04:19.847012] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.875 null0 00:39:01.875 [2024-06-08 01:04:19.879061] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:01.875 [2024-06-08 01:04:19.879438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:01.875 129678814 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:01.875 489344678 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=725464 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 725464 /var/tmp/bperf.sock 00:39:01.875 01:04:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 725464 ']' 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:01.875 01:04:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:01.875 [2024-06-08 01:04:19.952604] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
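
The two `keyctl add user ... @s` calls above are what set this suite apart from keyring_file: the formatted PSKs live in the kernel session keyring rather than in files, and keyctl answers with the serial numbers (129678814 and 489344678 in this run) that the later checks compare against. A small sketch of that step — serials will differ on any other run:

    import subprocess

    def keyctl_add(desc: str, payload: str) -> int:
        out = subprocess.check_output(
            ["keyctl", "add", "user", desc, payload, "@s"], text=True)
        return int(out)   # keyctl prints the new key's serial number

    sn0 = keyctl_add(":spdk-test:key0",
                     "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:")
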
00:39:01.875 [2024-06-08 01:04:19.952648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725464 ] 00:39:01.875 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.875 [2024-06-08 01:04:20.027812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.875 [2024-06-08 01:04:20.083217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.446 01:04:20 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:02.446 01:04:20 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:39:02.446 01:04:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:02.446 01:04:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:02.706 01:04:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:02.706 01:04:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:02.966 01:04:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:02.966 01:04:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:02.966 [2024-06-08 01:04:21.185736] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:03.226 nvme0n1 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:03.226 01:04:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:03.226 01:04:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:03.226 01:04:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.226 01:04:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:03.226 01:04:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@25 -- # sn=129678814 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
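
`check_keys` then resolves the key by description instead of trusting a stored serial: the `keyctl search @s user :spdk-test:key0` just above, followed below by the serial comparison and a `keyctl print` of the payload. The same round-trip in Python:

    import subprocess

    def keyctl(*args: str) -> str:
        return subprocess.check_output(["keyctl", *args], text=True).strip()

    desc = ":spdk-test:key0"
    sn = keyctl("search", "@s", "user", desc)   # 129678814 in this run
    assert keyctl("print", sn).startswith("NVMeTLSkey-1:00:")
    # cleanup later runs keyctl("unlink", sn), hence "1 links removed" below
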
00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 129678814 == \1\2\9\6\7\8\8\1\4 ]] 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 129678814 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:03.516 01:04:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:03.516 Running I/O for 1 seconds... 00:39:04.458 00:39:04.458 Latency(us) 00:39:04.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:04.458 nvme0n1 : 1.02 7397.93 28.90 0.00 0.00 17140.54 9175.04 19988.48 00:39:04.458 =================================================================================================================== 00:39:04.458 Total : 7397.93 28.90 0.00 0.00 17140.54 9175.04 19988.48 00:39:04.458 0 00:39:04.458 01:04:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:04.458 01:04:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:04.719 01:04:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:04.719 01:04:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:04.719 01:04:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:04.719 01:04:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:04.719 01:04:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:04.719 01:04:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:04.982 01:04:23 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:04.982 [2024-06-08 01:04:23.176053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:04.982 [2024-06-08 01:04:23.176774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1864560 (107): Transport endpoint is not connected 00:39:04.982 [2024-06-08 01:04:23.177770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1864560 (9): Bad file descriptor 00:39:04.982 [2024-06-08 01:04:23.178771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:04.982 [2024-06-08 01:04:23.178778] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:04.982 [2024-06-08 01:04:23.178783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:04.982 request: 00:39:04.982 { 00:39:04.982 "name": "nvme0", 00:39:04.982 "trtype": "tcp", 00:39:04.982 "traddr": "127.0.0.1", 00:39:04.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:04.982 "adrfam": "ipv4", 00:39:04.982 "trsvcid": "4420", 00:39:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.982 "psk": ":spdk-test:key1", 00:39:04.982 "method": "bdev_nvme_attach_controller", 00:39:04.982 "req_id": 1 00:39:04.982 } 00:39:04.982 Got JSON-RPC error response 00:39:04.982 response: 00:39:04.982 { 00:39:04.982 "code": -5, 00:39:04.982 "message": "Input/output error" 00:39:04.982 } 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@33 -- # sn=129678814 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 129678814 00:39:04.982 1 links removed 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@33 -- # sn=489344678 00:39:04.982 01:04:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 489344678 00:39:04.982 1 links removed 00:39:04.982 01:04:23 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 725464 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 725464 ']' 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 725464 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:04.982 01:04:23 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 725464 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 725464' 00:39:05.243 killing process with pid 725464 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@968 -- # kill 725464 00:39:05.243 Received shutdown signal, test time was about 1.000000 seconds 00:39:05.243 00:39:05.243 Latency(us) 00:39:05.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.243 =================================================================================================================== 00:39:05.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@973 -- # wait 725464 00:39:05.243 01:04:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 725330 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 725330 ']' 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 725330 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 725330 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 725330' 00:39:05.243 killing process with pid 725330 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@968 -- # kill 725330 00:39:05.243 01:04:23 keyring_linux -- common/autotest_common.sh@973 -- # wait 725330 00:39:05.504 00:39:05.504 real 0m4.835s 00:39:05.504 user 0m8.236s 00:39:05.504 sys 0m1.370s 00:39:05.504 01:04:23 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:05.504 01:04:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:05.504 ************************************ 00:39:05.504 END TEST keyring_linux 00:39:05.504 ************************************ 00:39:05.504 01:04:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:05.504 01:04:23 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']'
00:39:05.504 01:04:23 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:39:05.504 01:04:23 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:39:05.504 01:04:23 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:39:05.504 01:04:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:39:05.504 01:04:23 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:39:05.504 01:04:23 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:39:05.504 01:04:23 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:39:05.504 01:04:23 -- common/autotest_common.sh@723 -- # xtrace_disable
00:39:05.504 01:04:23 -- common/autotest_common.sh@10 -- # set +x
00:39:05.504 01:04:23 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:39:05.504 01:04:23 -- common/autotest_common.sh@1391 -- # local autotest_es=0
00:39:05.504 01:04:23 -- common/autotest_common.sh@1392 -- # xtrace_disable
00:39:05.504 01:04:23 -- common/autotest_common.sh@10 -- # set +x
00:39:13.650 INFO: APP EXITING
00:39:13.650 INFO: killing all VMs
00:39:13.650 INFO: killing vhost app
00:39:13.650 WARN: no vhost pid file found
00:39:13.650 INFO: EXIT DONE
00:39:16.195 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:39:16.195 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:65:00.0 (144d a80a): Already using the nvme driver
00:39:16.456 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:39:16.456 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:39:19.758 Cleaning
00:39:19.758 Removing: /var/run/dpdk/spdk0/config
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:19.758 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:19.758 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:19.758 Removing: /var/run/dpdk/spdk1/config
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:19.758 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:19.758 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:19.758 Removing: /var/run/dpdk/spdk1/mp_socket
00:39:19.758 Removing: /var/run/dpdk/spdk2/config
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:19.758 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:19.758 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:19.758 Removing: /var/run/dpdk/spdk3/config
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:19.758 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:19.758 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:19.758 Removing: /var/run/dpdk/spdk4/config
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:19.758 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:20.019 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:20.019 Removing: /dev/shm/bdev_svc_trace.1
00:39:20.019 Removing: /dev/shm/nvmf_trace.0
00:39:20.019 Removing: /dev/shm/spdk_tgt_trace.pid184650
00:39:20.019 Removing: /var/run/dpdk/spdk0
00:39:20.019 Removing: /var/run/dpdk/spdk1
00:39:20.019 Removing: /var/run/dpdk/spdk2
00:39:20.019 Removing: /var/run/dpdk/spdk3
00:39:20.019 Removing: /var/run/dpdk/spdk4
00:39:20.019 Removing: /var/run/dpdk/spdk_pid183171
00:39:20.019 Removing: /var/run/dpdk/spdk_pid184650
00:39:20.019 Removing: /var/run/dpdk/spdk_pid185493
00:39:20.019 Removing: /var/run/dpdk/spdk_pid186525
00:39:20.019 Removing: /var/run/dpdk/spdk_pid186870
00:39:20.019 Removing: /var/run/dpdk/spdk_pid187937
00:39:20.019 Removing: /var/run/dpdk/spdk_pid188069
00:39:20.019 Removing: /var/run/dpdk/spdk_pid188388
00:39:20.019 Removing: /var/run/dpdk/spdk_pid189515
00:39:20.019 Removing: /var/run/dpdk/spdk_pid189973
00:39:20.019 Removing: /var/run/dpdk/spdk_pid190361
00:39:20.019 Removing: /var/run/dpdk/spdk_pid190744
00:39:20.019 Removing: /var/run/dpdk/spdk_pid191148
00:39:20.019 Removing: /var/run/dpdk/spdk_pid191398
00:39:20.019 Removing: /var/run/dpdk/spdk_pid191589
00:39:20.019 Removing: /var/run/dpdk/spdk_pid191935
00:39:20.019 Removing: /var/run/dpdk/spdk_pid192319
00:39:20.019 Removing: /var/run/dpdk/spdk_pid193453
00:39:20.019 Removing: /var/run/dpdk/spdk_pid196958
00:39:20.019 Removing: /var/run/dpdk/spdk_pid197142
00:39:20.019 Removing: /var/run/dpdk/spdk_pid197459
00:39:20.019 Removing: /var/run/dpdk/spdk_pid197705
00:39:20.019 Removing: /var/run/dpdk/spdk_pid198079
00:39:20.019 Removing: /var/run/dpdk/spdk_pid198387
00:39:20.019 Removing: /var/run/dpdk/spdk_pid198789
00:39:20.019 Removing: /var/run/dpdk/spdk_pid198926
00:39:20.019 Removing: /var/run/dpdk/spdk_pid199177
00:39:20.019 Removing: /var/run/dpdk/spdk_pid199498
00:39:20.019 Removing: /var/run/dpdk/spdk_pid199603
00:39:20.019 Removing: /var/run/dpdk/spdk_pid199869
00:39:20.019 Removing: /var/run/dpdk/spdk_pid200310
00:39:20.019 Removing: /var/run/dpdk/spdk_pid200663
00:39:20.019 Removing: /var/run/dpdk/spdk_pid200990
00:39:20.019 Removing: /var/run/dpdk/spdk_pid201172
00:39:20.019 Removing: /var/run/dpdk/spdk_pid201378
00:39:20.019 Removing: /var/run/dpdk/spdk_pid201511
00:39:20.019 Removing: /var/run/dpdk/spdk_pid201864
00:39:20.019 Removing: /var/run/dpdk/spdk_pid202075
00:39:20.019 Removing: /var/run/dpdk/spdk_pid202268
00:39:20.019 Removing: /var/run/dpdk/spdk_pid202605
00:39:20.019 Removing: /var/run/dpdk/spdk_pid202952
00:39:20.019 Removing: /var/run/dpdk/spdk_pid203301
00:39:20.019 Removing: /var/run/dpdk/spdk_pid203546
00:39:20.019 Removing: /var/run/dpdk/spdk_pid203735
00:39:20.019 Removing: /var/run/dpdk/spdk_pid204042
00:39:20.019 Removing: /var/run/dpdk/spdk_pid204395
00:39:20.019 Removing: /var/run/dpdk/spdk_pid204744
00:39:20.019 Removing: /var/run/dpdk/spdk_pid205027
00:39:20.019 Removing: /var/run/dpdk/spdk_pid205223
00:39:20.019 Removing: /var/run/dpdk/spdk_pid205484
00:39:20.019 Removing: /var/run/dpdk/spdk_pid205833
00:39:20.019 Removing: /var/run/dpdk/spdk_pid206189
00:39:20.019 Removing: /var/run/dpdk/spdk_pid206484
00:39:20.019 Removing: /var/run/dpdk/spdk_pid206679
00:39:20.019 Removing: /var/run/dpdk/spdk_pid206933
00:39:20.019 Removing: /var/run/dpdk/spdk_pid207285
00:39:20.019 Removing: /var/run/dpdk/spdk_pid207460
00:39:20.019 Removing: /var/run/dpdk/spdk_pid207785
00:39:20.019 Removing: /var/run/dpdk/spdk_pid212213
00:39:20.019 Removing: /var/run/dpdk/spdk_pid309235
00:39:20.019 Removing: /var/run/dpdk/spdk_pid314369
00:39:20.019 Removing: /var/run/dpdk/spdk_pid326559
00:39:20.019 Removing: /var/run/dpdk/spdk_pid332926
00:39:20.019 Removing: /var/run/dpdk/spdk_pid337711
00:39:20.019 Removing: /var/run/dpdk/spdk_pid338510
00:39:20.019 Removing: /var/run/dpdk/spdk_pid355208
00:39:20.019 Removing: /var/run/dpdk/spdk_pid355580
00:39:20.019 Removing: /var/run/dpdk/spdk_pid360582
00:39:20.019 Removing: /var/run/dpdk/spdk_pid367522
00:39:20.019 Removing: /var/run/dpdk/spdk_pid371104
00:39:20.019 Removing: /var/run/dpdk/spdk_pid383094
00:39:20.019 Removing: /var/run/dpdk/spdk_pid393765
00:39:20.019 Removing: /var/run/dpdk/spdk_pid395836
00:39:20.019 Removing: /var/run/dpdk/spdk_pid397023
00:39:20.019 Removing: /var/run/dpdk/spdk_pid417056
00:39:20.019 Removing: /var/run/dpdk/spdk_pid421431
00:39:20.019 Removing: /var/run/dpdk/spdk_pid453104
00:39:20.280 Removing: /var/run/dpdk/spdk_pid458485
00:39:20.280 Removing: /var/run/dpdk/spdk_pid460473
00:39:20.280 Removing: /var/run/dpdk/spdk_pid462627
00:39:20.280 Removing: /var/run/dpdk/spdk_pid462837
00:39:20.280 Removing: /var/run/dpdk/spdk_pid463177
00:39:20.280 Removing: /var/run/dpdk/spdk_pid463346
00:39:20.280 Removing: /var/run/dpdk/spdk_pid463913
00:39:20.280 Removing: /var/run/dpdk/spdk_pid466207
00:39:20.280 Removing: /var/run/dpdk/spdk_pid467586
00:39:20.280 Removing: /var/run/dpdk/spdk_pid468268
00:39:20.280 Removing: /var/run/dpdk/spdk_pid470809
00:39:20.280 Removing: /var/run/dpdk/spdk_pid471578
00:39:20.280 Removing: /var/run/dpdk/spdk_pid472394
00:39:20.280 Removing: /var/run/dpdk/spdk_pid477230
00:39:20.280 Removing: /var/run/dpdk/spdk_pid483781
00:39:20.280 Removing: /var/run/dpdk/spdk_pid489515
00:39:20.280 Removing: /var/run/dpdk/spdk_pid534393
00:39:20.280 Removing: /var/run/dpdk/spdk_pid539125
00:39:20.280 Removing: /var/run/dpdk/spdk_pid546233
00:39:20.280 Removing: /var/run/dpdk/spdk_pid547780
00:39:20.280 Removing: /var/run/dpdk/spdk_pid549435
00:39:20.280 Removing: /var/run/dpdk/spdk_pid554539
00:39:20.280 Removing: /var/run/dpdk/spdk_pid559818
00:39:20.280 Removing: /var/run/dpdk/spdk_pid568713
00:39:20.280 Removing: /var/run/dpdk/spdk_pid568833
00:39:20.280 Removing: /var/run/dpdk/spdk_pid573604
00:39:20.280 Removing: /var/run/dpdk/spdk_pid573920
00:39:20.280 Removing: /var/run/dpdk/spdk_pid574246
00:39:20.280 Removing: /var/run/dpdk/spdk_pid574590
00:39:20.280 Removing: /var/run/dpdk/spdk_pid574688
00:39:20.280 Removing: /var/run/dpdk/spdk_pid575955
00:39:20.280 Removing: /var/run/dpdk/spdk_pid577951
00:39:20.280 Removing: /var/run/dpdk/spdk_pid579954
00:39:20.280 Removing: /var/run/dpdk/spdk_pid581948
00:39:20.280 Removing: /var/run/dpdk/spdk_pid583945
00:39:20.280 Removing: /var/run/dpdk/spdk_pid585837
00:39:20.280 Removing: /var/run/dpdk/spdk_pid592982
00:39:20.280 Removing: /var/run/dpdk/spdk_pid593695
00:39:20.280 Removing: /var/run/dpdk/spdk_pid594752
00:39:20.280 Removing: /var/run/dpdk/spdk_pid596103
00:39:20.280 Removing: /var/run/dpdk/spdk_pid602239
00:39:20.280 Removing: /var/run/dpdk/spdk_pid605568
00:39:20.280 Removing: /var/run/dpdk/spdk_pid612320
00:39:20.280 Removing: /var/run/dpdk/spdk_pid618814
00:39:20.280 Removing: /var/run/dpdk/spdk_pid628418
00:39:20.280 Removing: /var/run/dpdk/spdk_pid637073
00:39:20.280 Removing: /var/run/dpdk/spdk_pid637079
00:39:20.280 Removing: /var/run/dpdk/spdk_pid659333
00:39:20.280 Removing: /var/run/dpdk/spdk_pid660485
00:39:20.280 Removing: /var/run/dpdk/spdk_pid661165
00:39:20.280 Removing: /var/run/dpdk/spdk_pid661855
00:39:20.280 Removing: /var/run/dpdk/spdk_pid662912
00:39:20.280 Removing: /var/run/dpdk/spdk_pid663596
00:39:20.280 Removing: /var/run/dpdk/spdk_pid664278
00:39:20.280 Removing: /var/run/dpdk/spdk_pid664966
00:39:20.281 Removing: /var/run/dpdk/spdk_pid670005
00:39:20.281 Removing: /var/run/dpdk/spdk_pid670343
00:39:20.281 Removing: /var/run/dpdk/spdk_pid677375
00:39:20.281 Removing: /var/run/dpdk/spdk_pid677535
00:39:20.281 Removing: /var/run/dpdk/spdk_pid680244
00:39:20.281 Removing: /var/run/dpdk/spdk_pid687362
00:39:20.281 Removing: /var/run/dpdk/spdk_pid687367
00:39:20.281 Removing: /var/run/dpdk/spdk_pid693008
00:39:20.281 Removing: /var/run/dpdk/spdk_pid695407
00:39:20.281 Removing: /var/run/dpdk/spdk_pid697620
00:39:20.281 Removing: /var/run/dpdk/spdk_pid698978
00:39:20.281 Removing: /var/run/dpdk/spdk_pid701329
00:39:20.281 Removing: /var/run/dpdk/spdk_pid702791
00:39:20.281 Removing: /var/run/dpdk/spdk_pid713068
00:39:20.281 Removing: /var/run/dpdk/spdk_pid713715
00:39:20.281 Removing: /var/run/dpdk/spdk_pid714383
00:39:20.281 Removing: /var/run/dpdk/spdk_pid717179
00:39:20.281 Removing: /var/run/dpdk/spdk_pid717678
00:39:20.281 Removing: /var/run/dpdk/spdk_pid718350
00:39:20.281 Removing: /var/run/dpdk/spdk_pid722987
00:39:20.281 Removing: /var/run/dpdk/spdk_pid723216
00:39:20.281 Removing: /var/run/dpdk/spdk_pid724761
00:39:20.281 Removing: /var/run/dpdk/spdk_pid725330
00:39:20.281 Removing: /var/run/dpdk/spdk_pid725464
00:39:20.281 Clean
00:39:20.542 01:04:38 -- common/autotest_common.sh@1450 -- # return 0
00:39:20.542 01:04:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:39:20.542 01:04:38 -- common/autotest_common.sh@729 -- # xtrace_disable
00:39:20.542 01:04:38 -- common/autotest_common.sh@10 -- # set +x
00:39:20.542 01:04:38 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:39:20.542 01:04:38 -- common/autotest_common.sh@729 -- # xtrace_disable
00:39:20.542 01:04:38 -- common/autotest_common.sh@10 -- # set +x
00:39:20.542 01:04:38 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:20.542 01:04:38 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:20.542 01:04:38 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:20.542 01:04:38 -- spdk/autotest.sh@391 -- # hash lcov
00:39:20.542 01:04:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:39:20.542 01:04:38 -- spdk/autotest.sh@393 -- # hostname
00:39:20.542 01:04:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:20.803 geninfo: WARNING: invalid characters removed from testname!
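[Editor's note] The coverage post-processing around this point (autotest.sh@393 above and @394-@399 below) is the standard lcov capture/merge/filter flow. A minimal standalone sketch, assuming SPDK points at the repo checkout and OUT at the output directory (both stand-ins for the long Jenkins workspace paths in this log):

    #!/usr/bin/env bash
    # Sketch of the lcov flow driven by autotest.sh; paths are placeholders.
    SPDK=/path/to/spdk        # assumption: the instrumented source tree
    OUT="$SPDK/../output"     # assumption: where tracefiles are collected
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Capture the counters produced by the test run (-c), tagging the
    # tracefile with the hostname (-t), limited to in-tree sources.
    lcov $RC --no-external -q -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test capture (-a adds tracefiles).
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Strip third-party and uninteresting paths from the combined report
    # (-r removes entries matching a pattern), as @395-@399 do one by one.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done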
00:39:47.388 01:05:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:47.388 01:05:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:47.991 01:05:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:49.390 01:05:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:51.301 01:05:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:52.687 01:05:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:54.071 01:05:12 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:54.071 01:05:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:54.071 01:05:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:54.071 01:05:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:54.071 01:05:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:54.071 01:05:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:54.071 01:05:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:54.071 01:05:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:54.071 01:05:12 -- paths/export.sh@5 -- $ export PATH
00:39:54.071 01:05:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:54.071 01:05:12 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:54.071 01:05:12 -- common/autobuild_common.sh@437 -- $ date +%s
00:39:54.071 01:05:12 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717801512.XXXXXX
00:39:54.071 01:05:12 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717801512.CyfF1T
00:39:54.071 01:05:12 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:39:54.071 01:05:12 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:39:54.071 01:05:12 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:54.071 01:05:12 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:54.071 01:05:12 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:54.071 01:05:12 -- common/autobuild_common.sh@453 -- $ get_config_params
00:39:54.071 01:05:12 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:39:54.071 01:05:12 -- common/autotest_common.sh@10 -- $ set +x
00:39:54.071 01:05:12 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:39:54.071 01:05:12 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:39:54.071 01:05:12 -- pm/common@17 -- $ local monitor
00:39:54.071 01:05:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:54.071 01:05:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:54.071 01:05:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:54.071 01:05:12 -- pm/common@21 -- $ date +%s
00:39:54.071 01:05:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:54.071 01:05:12 -- pm/common@25 -- $ sleep 1
00:39:54.071 01:05:12 -- pm/common@21 -- $ date +%s
00:39:54.072 01:05:12 -- pm/common@21 -- $ date +%s
00:39:54.072 01:05:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717801512
00:39:54.072 01:05:12 -- pm/common@21 -- $ date +%s
00:39:54.072 01:05:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717801512
00:39:54.072 01:05:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717801512
00:39:54.072 01:05:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717801512
00:39:54.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717801512_collect-cpu-load.pm.log
00:39:54.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717801512_collect-vmstat.pm.log
00:39:54.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717801512_collect-cpu-temp.pm.log
00:39:54.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717801512_collect-bmc-pm.bmc.pm.log
00:39:55.012 01:05:13 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:39:55.012 01:05:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:39:55.012 01:05:13 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:55.012 01:05:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:39:55.012 01:05:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:39:55.012 01:05:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:39:55.012 01:05:13 -- spdk/autopackage.sh@19 -- $ timing_finish
00:39:55.012 01:05:13 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:55.012 01:05:13 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:39:55.012 01:05:13 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:55.272 01:05:13 -- spdk/autopackage.sh@20 -- $ exit 0
00:39:55.272 01:05:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:55.272 01:05:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:55.272 01:05:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:55.272 01:05:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:55.272 01:05:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:39:55.272 01:05:13 -- pm/common@44 -- $ pid=738310
00:39:55.272 01:05:13 -- pm/common@50 -- $ kill -TERM 738310
00:39:55.272 01:05:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:55.272 01:05:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:39:55.272 01:05:13 -- pm/common@44 -- $ pid=738311
00:39:55.272 01:05:13 -- pm/common@50 -- $ kill -TERM 738311
00:39:55.272 01:05:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:55.272 01:05:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:39:55.272 01:05:13 -- pm/common@44 -- $ pid=738313
00:39:55.272 01:05:13 -- pm/common@50 -- $ kill -TERM 738313
00:39:55.272 01:05:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:55.272 01:05:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:39:55.272 01:05:13 -- pm/common@44 -- $ pid=738340
00:39:55.272 01:05:13 -- pm/common@50 -- $ sudo -E kill -TERM 738340
00:39:55.272 + [[ -n 64291 ]]
00:39:55.272 + sudo kill 64291
00:39:55.285 [Pipeline] }
00:39:55.303 [Pipeline] // stage
00:39:55.308 [Pipeline] }
00:39:55.325 [Pipeline] // timeout
00:39:55.331 [Pipeline] }
00:39:55.348 [Pipeline] // catchError
00:39:55.355 [Pipeline] }
00:39:55.372 [Pipeline] // wrap
00:39:55.378 [Pipeline] }
00:39:55.394 [Pipeline] // catchError
00:39:55.403 [Pipeline] stage
00:39:55.405 [Pipeline] { (Epilogue)
00:39:55.420 [Pipeline] catchError
00:39:55.422 [Pipeline] {
00:39:55.436 [Pipeline] echo
00:39:55.438 Cleanup processes
00:39:55.443 [Pipeline] sh
00:39:55.731 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:55.732 738426 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:39:55.732 738859 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:55.747 [Pipeline] sh
00:39:56.035 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:56.035 ++ grep -v 'sudo pgrep'
00:39:56.035 ++ awk '{print $1}'
00:39:56.035 + sudo kill -9 738426
00:39:56.048 [Pipeline] sh
00:39:56.333 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:08.582 [Pipeline] sh
00:40:08.907 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:08.907 Artifacts sizes are good
00:40:08.923 [Pipeline] archiveArtifacts
00:40:08.930 Archiving artifacts
00:40:09.198 [Pipeline] sh
00:40:09.485 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:09.501 [Pipeline] cleanWs
00:40:09.511 [WS-CLEANUP] Deleting project workspace...
00:40:09.511 [WS-CLEANUP] Deferred wipeout is used...
00:40:09.519 [WS-CLEANUP] done
00:40:09.521 [Pipeline] }
00:40:09.541 [Pipeline] // catchError
00:40:09.554 [Pipeline] sh
00:40:09.839 + logger -p user.info -t JENKINS-CI
00:40:09.849 [Pipeline] }
00:40:09.863 [Pipeline] // stage
00:40:09.868 [Pipeline] }
00:40:09.882 [Pipeline] // node
00:40:09.887 [Pipeline] End of Pipeline
00:40:09.933 Finished: SUCCESS
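[Editor's note] The stop_monitor_resources sequence above (pm/common@42-@50) follows a conventional pidfile shutdown pattern: each resource monitor drops a <name>.pid file under the power/ output directory when it starts, and the EXIT trap walks the monitor list, reads each pidfile, and sends SIGTERM. A minimal sketch under those assumptions (names and paths are placeholders, not the verbatim pm/common code):

    #!/usr/bin/env bash
    # Sketch: pidfile-based shutdown of background monitors; placeholder paths.
    POWER_DIR=/path/to/output/power   # assumption: where monitors write pidfiles
    MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

    for monitor in "${MONITORS[@]}"; do
        pidfile="$POWER_DIR/$monitor.pid"
        [[ -e $pidfile ]] || continue          # monitor never started; skip it
        pid=$(<"$pidfile")                     # log shows the pid read back, e.g. pid=738310
        kill -TERM "$pid" 2>/dev/null || true  # best-effort TERM; the BMC monitor
                                               # runs as root, hence sudo -E kill above
    done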